browserify-zlib

Travis CI Dependency Status js-standard-style

Description

Emulates Node’s zlib module for the browser. Can be used as a drop-in replacement with Browserify and webpack.

The heavy lifting is done by pako. The code in this module is modeled closely on Node core’s source to achieve as much compatibility as possible.

API

https://nodejs.org/api/zlib.html

Not implemented

The following options/methods are not supported because pako does not support them yet.



get-caller-file

Build Status Build status

This is a utility that allows a function to figure out which file it was invoked from. It does so by inspecting V8’s stack trace at the time it is invoked.

Inspired by http://stackoverflow.com/questions/13227489

Note: this relies on Node/V8-specific APIs, so other runtimes may not work.

Installation

yarn add get-caller-file

Usage

Given:

// ./foo.js
const getCallerFile = require('get-caller-file');

module.exports = function() {
  return getCallerFile(); // figures out who called it
};
// index.js
const foo = require('./foo');

foo() // => /full/path/to/this/file/index.js

Options:



create-hmac

NPM Package Build Status Dependency status

js-standard-style

Node-style HMACs for use in the browser, with native HMAC functions in Node. The API is the same as HMACs in Node:

var createHmac = require('create-hmac')
var hmac = createHmac('sha224', Buffer.from('secret key'))
hmac.update('synchronous write') //optional encoding parameter
hmac.digest() // synchronously get result with optional encoding parameter

hmac.write('write to it as a stream')
hmac.end() //remember it's a stream
hmac.read() //only if you ended it as a stream though


is-error

Detect whether a value is an error

Example

var isError = require("is-error");

console.log(isError(new Error('hi'))) // true
console.log(isError({ message: 'hi' })) // false

Docs

var bool = isError(maybeErr)

is-error := (maybeErr: Any) => Boolean

isError returns a boolean indicating whether the argument is an error.

Installation

npm install is-error

Tests

npm test



buffer-equal-constant-time

Constant-time Buffer comparison for node.js. Should work with browserify too.

Build Status

  npm install buffer-equal-constant-time


Usage

  var bufferEq = require('buffer-equal-constant-time');

  var a = new Buffer('asdf');
  var b = new Buffer('asdf');
  if (bufferEq(a,b)) {
    // the same!
  } else {
    // different in at least one byte!
  }

If you’d like to install an .equal() method onto the node.js Buffer and SlowBuffer prototypes:

  require('buffer-equal-constant-time').install();

  var a = new Buffer('asdf');
  var b = new Buffer('asdf');
  if (a.equal(b)) {
    // the same!
  } else {
    // different in at least one byte!
  }

To get rid of the installed .equal() method, call .restore():

  require('buffer-equal-constant-time').restore();




lodash v4.17.20

The Lodash library exported as Node.js modules.

Installation

Using npm:

npm i -g npm
npm i --save lodash

In Node.js:

// Load the full build.
var _ = require('lodash');
// Load the core build.
var _ = require('lodash/core');
// Load the FP build for immutable auto-curried iteratee-first data-last methods.
var fp = require('lodash/fp');

// Load method categories.
var array = require('lodash/array');
var object = require('lodash/fp/object');

// Cherry-pick methods for smaller browserify/rollup/webpack bundles.
var at = require('lodash/at');
var curryN = require('lodash/fp/curryN');

See the package source for more details.

Note:
Install n_ for Lodash use in the Node.js < 6 REPL.

Tested in Chrome 74-75, Firefox 66-67, IE 11, Edge 18, Safari 11-12, & Node.js 8-12.
Automated browser & CI test runs are available.



md5.js

NPM Package Build Status Dependency status

js-standard-style

Node-style MD5 in pure JavaScript.

From NIST SP 800-131A: md5 is no longer acceptable where collision resistance is required such as digital signatures.

Example

var MD5 = require('md5.js')

console.log(new MD5().update('42').digest('hex'))
// => a1d0c6e83f027327d8461063f4ac58a6

var md5stream = new MD5()
md5stream.end('42')
console.log(md5stream.read().toString('hex'))
// => a1d0c6e83f027327d8461063f4ac58a6


DES.js



Brorand



Miller-Rabin



base64-js

base64-js does basic base64 encoding/decoding in pure JS.

build status

Many browsers already have base64 encoding/decoding functionality, but it is for text data, not all-purpose binary data.

Sometimes encoding/decoding binary data in the browser is useful, and that is what this module does.

install

With npm do:

npm install base64-js and var base64js = require('base64-js')

For use in web browsers do:

<script src="base64js.min.js"></script>

Get supported base64-js with the Tidelift Subscription

methods

base64js has three exposed functions, byteLength, toByteArray and fromByteArray, each of which takes a single argument.
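For orientation, here is a sketch of what the three functions compute, using Node’s Buffer as a reference point (an assumption for illustration; base64-js itself performs the same conversions on plain Uint8Arrays in the browser):

```javascript
// Reference behavior via Buffer; base64-js provides the equivalent
// conversions for plain Uint8Arrays in the browser.
var bytes = new Uint8Array([72, 105, 33]);        // "Hi!"

// fromByteArray(bytes) => base64 string
var b64 = Buffer.from(bytes).toString('base64');  // 'SGkh'

// toByteArray(b64) => Uint8Array of decoded bytes
var decoded = Buffer.from(b64, 'base64');         // bytes 0x48 0x69 0x21

// byteLength(b64) => number of bytes the string decodes to
var len = decoded.length;                         // 3
```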



Fast Diff Build Status

This is a simplified import of the excellent diff-match-patch library by Neil Fraser into the Node.js environment. The match and patch parts are removed, as well as all the extra diff options. What remains is incredibly fast diffing between two strings.

The diff function is an implementation of “An O(ND) Difference Algorithm and its Variations” (Myers, 1986) with the suggested divide and conquer strategy along with several optimizations Neil added.

var diff = require('fast-diff');

var good = 'Good dog';
var bad = 'Bad dog';

var result = diff(good, bad);
// [[-1, "Goo"], [1, "Ba"], [0, "d dog"]]

// Respect suggested edit location (cursor position), added in v1.1
diff('aaa', 'aaaa', 1)
// [[0, "a"], [1, "a"], [0, "aa"]]

// For convenience
diff.INSERT === 1;
diff.EQUAL === 0;
diff.DELETE === -1;


eslint-rule-docs

Actions Status NPM

Find documentation url for a given ESLint rule. Updated daily!

Install

$ npm install eslint-rule-docs

Usage

const getRuleUrl = require('eslint-rule-docs');

// Find url for core rules
getRuleUrl('no-undef');
// => { exactMatch: true, url: 'https://eslint.org/docs/rules/no-undef' }

// Find url for known plugins
getRuleUrl('react/sort-prop-types');
// => { exactMatch: true, url: 'https://github.com/yannickcr/eslint-plugin-react/blob/master/docs/rules/sort-prop-types.md' }

// If the plugin has no documentation, returns the repository url
getRuleUrl('flowtype/semi');
// => { exactMatch: false, url: 'https://github.com/gajus/eslint-plugin-flowtype' }

// If the plugin is unknown, returns an empty object
getRuleUrl('unknown-foo/bar');
// => {}


atob

atob
btoa
unibabel.js
Sponsored by ppl

Uses Buffer to emulate the exact functionality of the browser’s atob.

Note: Unicode may be handled incorrectly (like the browser).

It turns base64-encoded ASCII data back into binary.

(function () {
  "use strict";

  var atob = require('atob');
  var b64 = "SGVsbG8sIFdvcmxkIQ==";
  var bin = atob(b64);

  console.log(bin); // "Hello, World!"
}());

Check out unibabel.js



Changelog



Docs released under Creative Commons.

Caseless – wrap an object to set and get properties with caseless semantics while also preserving casing.

This library is incredibly useful when working with HTTP headers. It allows you to get/set/check for headers in a caseless manner while also preserving the casing of headers the first time they are set.

Usage

var headers = {}
  , c = caseless(headers)
  ;
c.set('a-Header', 'asdf')
c.get('a-header') === 'asdf'

has(key)

has takes a name and, if it finds a matching header, returns that header name with the preserved casing it was set with.

c.has('a-header') === 'a-Header'

set(key, value[, clobber=true])

Set is fairly straightforward, except that if the header already exists and clobber is disabled it appends ','+value to the existing header.

c.set('a-Header', 'fdas')
c.set('a-HEADER', 'more', false)
c.get('a-header') === 'fdas,more'

swap(key)

Swaps the casing of a header with the new one that is passed in.

var headers = {}
  , c = caseless(headers)
  ;
c.set('a-Header', 'fdas')
c.swap('a-HEADER')
c.has('a-header') === 'a-HEADER'
headers === {'a-HEADER': 'fdas'}


confusing-browser-globals

A curated list of browser globals that commonly cause confusion and are not recommended to use without an explicit window. qualifier.

Motivation

Some global variables in the browser, such as status, name, and event, are likely to be referenced by people without the intent of using them as globals.

For example:

handleClick() { // missing `event` argument
  this.setState({
    text: event.target.value // uses the `event` global: oops!
  });
}

This package exports a list of globals that are often used by mistake. You can feed this list to a static analysis tool like ESLint to prevent their usage without an explicit window. qualifier.

Installation

npm install --save confusing-browser-globals

Usage

If you use Create React App, you don’t need to configure anything, as this rule is already included in the default eslint-config-react-app preset.

If you maintain your own ESLint configuration, you can do this:

var restrictedGlobals = require('confusing-browser-globals');

module.exports = {
  rules: {
    'no-restricted-globals': ['error'].concat(restrictedGlobals),
  },
};


js-string-escape

Build Status

Escape any string to be a valid JavaScript string literal between double quotes or single quotes.

Installation

npm install js-string-escape

Example

If you need to generate JavaScript output, this library will help you safely put arbitrary data in JavaScript strings:

jsStringEscape = require('js-string-escape')

console.log('"' + jsStringEscape('Quotes (\", \'), newlines (\n), etc.') + '"')
// => "Quotes (\", \'), newlines (\n), etc."

In other words, given any string s, the following invariants hold:

eval('"' + jsStringEscape(s) + '"') === s
eval("'" + jsStringEscape(s) + "'") === s

These eval expressions are safe with untrusted strings s.

Non-strings will be cast to strings.

Compliance

This library has been checked against ECMAScript 5.1 and tested against all Unicode code points.



Overview

Adds support for the timers module to browserify.

Wait, isn’t it already supported in the browser?

The public methods of the timers module are:

and indeed, browsers support these already.

So, why does this exist?

The timers module also includes some private methods used in other built-in Node.js modules:

These are used to efficiently support a large quantity of timers with the same timeouts by creating only a few timers under the covers.

Node.js also offers the immediate APIs, which aren’t yet available cross-browser, so we polyfill those:

I need lots of timers and want to use linked list timers as Node.js does.

Linked lists are efficient when you have thousands (millions?) of timers with the same delay. Take a look at timers-browserify-full in this case.



node-asn1 is a library for encoding and decoding ASN.1 datatypes in pure JS. Currently BER encoding is supported; at some point I’ll likely have to do DER.

Usage

If you actually need to read and write ASN.1, you probably don’t need this README to explain what it is or why. If you have no idea what ASN.1 is, see this: ftp://ftp.rsa.com/pub/pkcs/ascii/layman.asc

The source is pretty much self-explanatory, and has read/write methods for the common types out there.

Decoding

The following reads an ASN.1 sequence with a boolean.

var Ber = require('asn1').Ber;

var reader = new Ber.Reader(Buffer.from([0x30, 0x03, 0x01, 0x01, 0xff]));

reader.readSequence();
console.log('Sequence len: ' + reader.length);
if (reader.peek() === Ber.Boolean)
  console.log(reader.readBoolean());

Encoding

The following generates the same payload as above.

var Ber = require('asn1').Ber;

var writer = new Ber.Writer();

writer.startSequence();
writer.writeBoolean(true);
writer.endSequence();

console.log(writer.buffer);

Installation

npm install asn1

Bugs

See https://github.com/joyent/node-asn1/issues.



Merge Descriptors

NPM Version NPM Downloads Build Status Test Coverage

Merge objects using descriptors.

var thing = {
  get name() {
    return 'jon'
  }
}

var animal = {

}

merge(animal, thing)

animal.name === 'jon'

API

merge(destination, source)

Redefines destination’s descriptors with source’s.

merge(destination, source, false)

Defines source’s descriptors on destination if destination does not have a descriptor by the same name.
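A hypothetical standalone sketch of those semantics (not the library’s own code), using plain property descriptors:

```javascript
// Sketch of the redefine === false semantics: existing descriptors
// on destination win; otherwise source's descriptors are defined.
function mergeSketch(dest, src, redefine) {
  Object.getOwnPropertyNames(src).forEach(function (name) {
    if (redefine === false && Object.getOwnPropertyDescriptor(dest, name)) {
      return; // destination already has this property: keep it
    }
    Object.defineProperty(dest, name, Object.getOwnPropertyDescriptor(src, name));
  });
  return dest;
}

var dest = { name: 'cat' };
mergeSketch(dest, { get name() { return 'jon' } }, false);
dest.name === 'cat' // destination kept its own descriptor
```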

SPDX

From version 2.0 of the SPDX specification:

The Linux Foundation and the SPDX working groups are good people. Only they decide what “SPDX” means, as a standard and otherwise. I respect their work and their rights. You should, too.

This Package

I created this package by copying exception identifiers out of the SPDX specification. That work was mechanical, routine, and required no creativity whatsoever. - Kyle Mitchell, package author

United States users concerned about intellectual property may wish to discuss the following Supreme Court decisions with their attorneys:



Array Flatten

NPM version NPM downloads Build status Test coverage

Flatten an array of nested arrays into a single flat array. Accepts an optional depth.

Installation

npm install array-flatten --save

Usage

var flatten = require('array-flatten')

flatten([1, [2, [3, [4, [5], 6], 7], 8], 9])
//=> [1, 2, 3, 4, 5, 6, 7, 8, 9]

flatten([1, [2, [3, [4, [5], 6], 7], 8], 9], 2)
//=> [1, 2, 3, [4, [5], 6], 7, 8, 9]

(function () {
  flatten(arguments) //=> [1, 2, 3]
})(1, [2, 3])


unpipe

NPM Version NPM Downloads Node.js Version Build Status Test Coverage

Unpipe a stream from all destinations.

Installation

$ npm install unpipe

API

var unpipe = require('unpipe')

unpipe(stream)

Unpipes all destinations from a given stream. With streams2+ streams, this is equivalent to calling stream.unpipe(). With streams1-style streams (typically Node.js 0.8 and below), this module attempts to undo the actions performed by stream.pipe(dest).



json-stringify-safe

Like JSON.stringify, but doesn’t throw on circular references.

Usage

Takes the same arguments as JSON.stringify.

var stringify = require('json-stringify-safe');
var circularObj = {};
circularObj.circularRef = circularObj;
circularObj.list = [ circularObj, circularObj ];
console.log(stringify(circularObj, null, 2));

Output:

{
  "circularRef": "[Circular]",
  "list": [
    "[Circular]",
    "[Circular]"
  ]
}

Details

stringify(obj, serializer, indent, decycler)

The first three arguments are the same as to JSON.stringify. The last is an argument that’s only used when the object has been seen already.

The default decycler function returns the string '[Circular]'. If, for example, you pass in function(k,v){} (return nothing) then it will prune cycles. If you pass in function(k,v){ return {foo: 'bar'}}, then cyclical objects will always be represented as {"foo":"bar"} in the result.
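To illustrate what a cycle-pruning decycler achieves, here is a standalone sketch using plain JSON.stringify and a WeakSet; this is not the library’s implementation:

```javascript
// Prune cycles: an already-seen object serializes as undefined,
// so it is simply dropped from the output.
function pruningStringify(obj) {
  var seen = new WeakSet();
  return JSON.stringify(obj, function (key, value) {
    if (typeof value === 'object' && value !== null) {
      if (seen.has(value)) return undefined; // prune the cycle
      seen.add(value);
    }
    return value;
  });
}

var circular = { a: 1 };
circular.self = circular;
pruningStringify(circular); // '{"a":1}'
```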

stringify.getSerialize(serializer, decycler)

Returns a serializer that can be used elsewhere. This is the actual function that’s passed to JSON.stringify.

Note that the function returned from getSerialize is stateful for now, so do not use it more than once.



Deep Extend

Recursive object extending.

Build Status

NPM

Install

$ npm install deep-extend

Usage

var deepExtend = require('deep-extend');
var obj1 = {
  a: 1,
  b: 2,
  d: {
    a: 1,
    b: [],
    c: { test1: 123, test2: 321 }
  },
  f: 5,
  g: 123,
  i: 321,
  j: [1, 2]
};
var obj2 = {
  b: 3,
  c: 5,
  d: {
    b: { first: 'one', second: 'two' },
    c: { test2: 222 }
  },
  e: { one: 1, two: 2 },
  f: [],
  g: (void 0),
  h: /abc/g,
  i: null,
  j: [3, 4]
};

deepExtend(obj1, obj2);

console.log(obj1);
/*
{ a: 1,
  b: 3,
  d:
   { a: 1,
     b: { first: 'one', second: 'two' },
     c: { test1: 123, test2: 222 } },
  f: [],
  g: undefined,
  c: 5,
  e: { one: 1, two: 2 },
  h: /abc/g,
  i: null,
  j: [3, 4] }
*/

Unit testing

$ npm test

Changelog

CHANGELOG.md

Any issues?

Please report issues here.



is-ci

Returns true if the current environment is a Continuous Integration server.

Please open an issue if your CI server isn’t properly detected :)

npm Build status js-standard-style

Installation

npm install is-ci --save

Programmatic Usage

const isCI = require('is-ci')

if (isCI) {
  console.log('The code is running on a CI server')
}

CLI Usage

For CLI usage you need to have the is-ci executable in your PATH. There are a few ways to do that:

is-ci && echo "This is a CI server"

Refer to the ci-info docs for all supported CIs.



utils-merge

Version Build Quality Coverage Dependencies

Merges the properties from a source object into a destination object.

Install

$ npm install utils-merge

Usage

var a = { foo: 'bar' }
  , b = { bar: 'baz' };

merge(a, b);
// => { foo: 'bar', bar: 'baz' }

Sponsor



eslint-import-resolver-node

npm

Default Node-style module resolution plugin for eslint-plugin-import.

Published separately to allow pegging to a specific version in case of breaking changes.

Config is passed directly through to resolve as options:

settings:
  import/resolver:
    node:
      extensions:
        # if unset, default is just '.js', but it must be re-added explicitly if set
        - .js
        - .jsx
        - .es6
        - .coffee

      paths:
        # an array of absolute paths which will also be searched
        # think NODE_PATH
        - /usr/local/share/global_modules

      # this is technically for identifying `node_modules` alternate names
      moduleDirectory:

        - node_modules # defaults to 'node_modules', but...
        - bower_components

        - project/src  # can add a path segment here that will act like
                       # a source root, for in-project aliasing (i.e.
                       # `import MyStore from 'stores/my-store'`)

or to use the default options:

settings:
  import/resolver: node


is-core-module Version Badge

npm badge

Is this specifier a node.js core module? Optionally provide a node version to check; defaults to the current node version.

Example

var isCore = require('is-core-module');
var assert = require('assert');
assert(isCore('fs'));
assert(!isCore('butts'));

Tests

Clone the repo, npm install, and run npm test



signal-exit

Build Status Coverage NPM version Standard Version

When you want to fire an event no matter how a process exits:

Use signal-exit.

var onExit = require('signal-exit')

onExit(function (code, signal) {
  console.log('process exited!')
})

API

var remove = onExit(function (code, signal) {}, options)

The return value of the function is a function that will remove the handler.

Note that the function only fires for signals if the signal would cause the process to exit: that is, there are no other listeners, and it is a fatal signal.

Options



which

Like the unix which utility.

Finds the first instance of a specified executable in the PATH environment variable. Does not cache the results, so hash -r is not needed when the PATH changes.

USAGE

var which = require('which')

// async usage
which('node', function (er, resolvedPath) {
  // er is returned if no "node" is found on the PATH
  // if it is found, then the absolute path to the exec is returned
})

// or promise
which('node').then(resolvedPath => { ... }).catch(er => { ... not found ... })

// sync usage
// throws if not found
var resolved = which.sync('node')

// if nothrow option is used, returns null if not found
resolved = which.sync('node', {nothrow: true})

// Pass options to override the PATH and PATHEXT environment vars.
which('node', { path: someOtherPath }, function (er, resolved) {
  if (er)
    throw er
  console.log('found at %j', resolved)
})

CLI USAGE

Same as the BSD which(1) binary.

usage: which [-as] program ...

OPTIONS

You may pass an options object as the second argument.



extsprintf: extended POSIX-style sprintf

Stripped down version of s[n]printf(3c). We make a best effort to throw an exception when given a format string we don’t understand, rather than ignoring it, so that we won’t break existing programs if/when we go implement the rest of this.

This implementation currently supports specifying

Everything else is currently unsupported, most notably: precision, unsigned numbers, non-decimal numbers, and characters.

Besides the usual POSIX conversions, this implementation supports:



Example

First, install it:

# npm install extsprintf

Now, use it:

var mod_extsprintf = require('extsprintf');
console.log(mod_extsprintf.sprintf('hello %25s', 'world'));

outputs:

hello                     world


Also supported

printf: same args as sprintf, but prints the result to stdout

fprintf: same args as sprintf, preceded by a Node stream. Prints the result to the given stream.



process

require('process'); just like any other module.

Works in node.js and browsers via the browser.js shim provided with the module.

browser implementation

The goal of this module is not to be a full-fledged alternative to the builtin process module. This module mostly exists to provide the nextTick functionality and little more. We keep this module lean because it will often be included by default by tools like browserify when it detects a module has used the process global.

It also exposes a “browser” member (i.e. process.browser) which is true in this implementation but undefined in node. This can be used in isomorphic code that adjusts its behavior depending on the environment it’s running in.
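For example, a minimal isomorphic branch might look like this:

```javascript
// process.browser is true under this shim and undefined in Node,
// so it can be used to branch isomorphic code.
var env = process.browser ? 'browser' : 'node';
console.log(env); // prints 'node' when run under Node
```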

If you are looking to provide other process methods, I suggest you monkey patch them onto the process global in your app. A list of user created patches is below.

package manager notes

If you are writing a bundler to package modules for client side use, make sure you use the browser field hint in package.json.

See https://gist.github.com/4339901 for details.

The browserify module will properly handle this field when bundling your files.



common-path-prefix

Computes the longest prefix string that is common to each path, excluding the base component. Tested with Node.js 8 and above.

Installation

npm install common-path-prefix

Usage

The module has one default export, the commonPathPrefix function:

const commonPathPrefix = require('common-path-prefix')

Call commonPathPrefix() with an array of paths (strings) and an optional separator character:

const paths = ['templates/main.handlebars', 'templates/_partial.handlebars']

commonPathPrefix(paths, '/') // returns 'templates/'

If no separator is provided, the first / or \ found in any of the paths is used; if neither occurs, the platform default is used:

commonPathPrefix(['templates/main.handlebars', 'templates/_partial.handlebars']) // returns 'templates/'
commonPathPrefix(['templates\\main.handlebars', 'templates\\_partial.handlebars']) // returns 'templates\\'

You can provide any separator, for example:

commonPathPrefix(['foo$bar', 'foo$baz'], '$') // returns 'foo$'

An empty string is returned if no common prefix exists:

commonPathPrefix(['foo/bar', 'baz/qux']) // returns ''
commonPathPrefix(['foo/bar']) // returns ''

Note that the following does have a common prefix:

commonPathPrefix(['/foo/bar', '/baz/qux']) // returns '/'


isexe

Minimal module to check if a file is executable, and a normal file.

Uses fs.stat and tests against the PATHEXT environment variable on Windows.

USAGE

var isexe = require('isexe')
isexe('some-file-name', function (err, isExe) {
  if (err) {
    console.error('probably file does not exist or something', err)
  } else if (isExe) {
    console.error('this thing can be run')
  } else {
    console.error('cannot be run')
  }
})

// same thing but synchronous, throws errors
var isExe = isexe.sync('some-file-name')

// treat errors as just "not executable"
isexe('maybe-missing-file', { ignoreErrors: true }, callback)
var isExe = isexe.sync('maybe-missing-file', { ignoreErrors: true })

API

isexe(path, [options], [callback])

Check if the path is executable. If no callback provided, and a global Promise object is available, then a Promise will be returned.

Will raise whatever errors may be raised by fs.stat, unless options.ignoreErrors is set to true.

isexe.sync(path, [options])

Same as isexe but returns the value and throws any errors raised.

Options

is-symbol Version Badge

npm badge

browser support

Is this an ES6 Symbol value?

Example

var isSymbol = require('is-symbol');
assert(!isSymbol(function () {}));
assert(!isSymbol(null));
assert(!isSymbol(function* () { yield 42; return Infinity; }));

assert(isSymbol(Symbol.iterator));
assert(isSymbol(Symbol('foo')));
assert(isSymbol(Symbol.for('foo')));
assert(isSymbol(Object(Symbol('foo'))));

Tests

Simply clone the repo, npm install, and run npm test



ci-parallel-vars

Get CI environment variables for parallelizing builds

Install

yarn add ci-parallel-vars

Usage

const ciParallelVars = require('ci-parallel-vars');

console.log(ciParallelVars); // { index: 3, total: 10 } || null

If you want to add support for another pair, please open a pull request and add them to index.js and to this list.

Both variables in one of these pairs must be defined as numbers, or ci-parallel-vars will be null.
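A typical use is partitioning test files across machines. The { index, total } shape comes from the package; the partitioning helper below is a hypothetical sketch:

```javascript
// Split a file list across CI machines using the { index, total } shape
// that ci-parallel-vars exposes (helper is illustrative, not from the package).
function partition(files, vars) {
  if (!vars) return files; // not on a parallelized CI: run everything
  return files.filter(function (_, i) { return i % vars.total === vars.index; });
}

var files = ['a.test.js', 'b.test.js', 'c.test.js', 'd.test.js'];
partition(files, { index: 1, total: 2 }); // ['b.test.js', 'd.test.js']
```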



function-bind

Implementation of function.prototype.bind

Example

I mainly do this for unit tests I run on phantomjs. PhantomJS does not have Function.prototype.bind :(

Function.prototype.bind = require("function-bind")

Installation

npm install function-bind



SocketPool.swf

Some special networking features can optionally use a Flash component. Building the output SWF file requires the Flex SDK. A pre-built component is included: swf/SocketPool.swf.

Building the output SWF requires the mxmlc tool from the Flex SDK. If that tool is already installed, look in the package.json file for the commands to rebuild it. If you need the SDK installed, there is an npm module that installs it:

npm install

To build a regular component:

npm run build

Additional debug support can be built in with the following:

npm run build-debug

Policy Server

Flash support requires the use of a Policy Server.

Apache Flash Socket Policy Module

mod_fsp provides an Apache module that can serve up a Flash Socket Policy. See mod_fsp/README for more details. This module makes it easy to modify an Apache server to allow cross domain requests to be made to it.

Simple Python Policy Server

policyserver.py provides a very simple test policy server.

Simple Node.js Policy Server

policyserver.js provides a very simple test policy server. If a server is needed for production environments, consider another option such as nodejs_socket_policy_server.



ESLint Scope

ESLint Scope is the ECMAScript scope analyzer used in ESLint. It is a fork of escope.

Usage

Install:

npm i eslint-scope --save

Example:

var eslintScope = require('eslint-scope');
var espree = require('espree');
var estraverse = require('estraverse');

var ast = espree.parse(code);
var scopeManager = eslintScope.analyze(ast);

var currentScope = scopeManager.acquire(ast);   // global scope

estraverse.traverse(ast, {
    enter: function(node, parent) {
        // do stuff

        if (/Function/.test(node.type)) {
            currentScope = scopeManager.acquire(node);  // get current function scope
        }
    },
    leave: function(node, parent) {
        if (/Function/.test(node.type)) {
            currentScope = currentScope.upper;  // set to parent scope
        }

        // do stuff
    }
});

Contributing

Issues and pull requests will be triaged and responded to as quickly as possible. We operate under the ESLint Contributor Guidelines, so please be sure to read them before contributing. If you’re not sure where to dig in, check out the issues.

Build Commands



hash.js Build Status

Just a bike-shed.

Install

npm install hash.js

Usage

var hash = require('hash.js')
hash.sha256().update('abc').digest('hex')

Selective hash usage

var sha512 = require('hash.js/lib/hash/sha/512');
sha512().update('abc').digest('hex');


eslint-utils

npm version Downloads/month Build Status Coverage Status Dependency Status

🏁 Goal

This package provides utility functions and classes for making custom ESLint rules.

For example:

📖 Usage

See documentation.

📰 Changelog

See releases.

❤️ Contributing

Contributions are welcome!

Please use GitHub’s Issues/PRs.

Development Tools



node-http-signature

node-http-signature is a node.js library that has client and server components for Joyent’s HTTP Signature Scheme.

Usage

Note the example below signs a request with the same key/cert used to start an HTTP server. This is almost certainly not what you actually want, but is just used to illustrate the API calls; you will need to provide your own key management in addition to this library.

Client

var fs = require('fs');
var https = require('https');
var httpSignature = require('http-signature');

var key = fs.readFileSync('./key.pem', 'ascii');

var options = {
  host: 'localhost',
  port: 8443,
  path: '/',
  method: 'GET',
  headers: {}
};

// Adds a 'Date' header in, signs it, and adds the
// 'Authorization' header in.
var req = https.request(options, function(res) {
  console.log(res.statusCode);
});


httpSignature.sign(req, {
  key: key,
  keyId: './cert.pem'
});

req.end();

Server

var fs = require('fs');
var https = require('https');
var httpSignature = require('http-signature');

var options = {
  key: fs.readFileSync('./key.pem'),
  cert: fs.readFileSync('./cert.pem')
};

https.createServer(options, function (req, res) {
  var rc = 200;
  var parsed = httpSignature.parseRequest(req);
  var pub = fs.readFileSync(parsed.keyId, 'ascii');
  if (!httpSignature.verifySignature(parsed, pub))
    rc = 401;

  res.writeHead(rc);
  res.end();
}).listen(8443);

Installation

npm install http-signature

Bugs

See https://github.com/joyent/node-http-signature/issues.



pbkdf2

NPM Package Build Status Dependency status

js-standard-style

This library provides the functionality of PBKDF2 with the ability to use any supported hashing algorithm returned from crypto.getHashes().

Usage

var pbkdf2 = require('pbkdf2')
var derivedKey = pbkdf2.pbkdf2Sync('password', 'salt', 1, 32, 'sha512')

...

For more information on the API, please see the relevant Node documentation.

For high performance, use the async variant (pbkdf2.pbkdf2) rather than pbkdf2.pbkdf2Sync; the async variant has the opportunity to use window.crypto.subtle when browserified.

Credits

This module is a derivative of cryptocoinjs/pbkdf2-sha256, so thanks to JP Richardson for laying the ground work.

Thank you to FangDun Cai for donating the package name on npm; if you’re looking for his previous module, it is located at fundon/pbkdf2.



jsbn: javascript big number

Tom Wu’s Original Website

I felt compelled to put this on github and publish to npm. I haven’t tested every other big integer library out there, but the few that I have tested in comparison to this one have not even come close in performance. I am aware of the bi module on npm, however it has been modified and I wanted to publish the original without modifications. This is jsbn and jsbn2 from Tom Wu’s original website above, with the modular pattern applied to prevent global leaks and to allow for use with node.js on the server side.

usage

var BigInteger = require('jsbn');

var a = new BigInteger('91823918239182398123');
alert(a.bitLength()); // 67

API

bi.toString()

returns the base-10 number as a string

bi.negate()

returns a new BigInteger equal to the negation of bi

bi.abs

returns new BI of absolute value

bi.compareTo

bi.bitLength

bi.mod

bi.modPowInt

bi.clone

bi.intValue

bi.byteValue

bi.shortValue

bi.signum

bi.toByteArray

bi.equals

bi.min

bi.max

bi.and

bi.or

bi.xor

bi.andNot

bi.not

bi.shiftLeft

bi.shiftRight

bi.getLowestSetBit

bi.bitCount

bi.testBit

bi.setBit

bi.clearBit

bi.flipBit

bi.add

bi.subtract

bi.multiply

bi.divide

bi.remainder

bi.divideAndRemainder

bi.modPow

bi.modInverse

bi.pow

bi.gcd

bi.isProbablePrime

is-regex Version Badge

npm badge

browser support

Is this value a JS regex? This module works cross-realm/iframe, and despite ES6 @@toStringTag.

Example

var isRegex = require('is-regex');
var assert = require('assert');

assert.notOk(isRegex(undefined));
assert.notOk(isRegex(null));
assert.notOk(isRegex(false));
assert.notOk(isRegex(true));
assert.notOk(isRegex(42));
assert.notOk(isRegex('foo'));
assert.notOk(isRegex(function () {}));
assert.notOk(isRegex([]));
assert.notOk(isRegex({}));

assert.ok(isRegex(/a/g));
assert.ok(isRegex(new RegExp('a', 'g')));

Tests

Simply clone the repo, npm install, and run npm test

Browser-friendly inheritance fully compatible with standard node.js inherits.

This package exports the standard inherits from the node.js util module in a node environment, but also provides an alternative browser-friendly implementation through the browser field. The alternative implementation is a literal copy of the standard one, located in a standalone module so that util need not be required. It also has a shim for old browsers with no Object.create support.

While ensuring that you use the standard inherits implementation in a node.js environment, this allows bundlers such as browserify to exclude the full util package from your client code when inherits is all you need. This is worthwhile because the browser shim for the util package is large, and inherits is often the only function you need from it.

It’s recommended to use this package instead of require('util').inherits for any code that may run in the browser as well as in node.js.

usage

var inherits = require('inherits');
// then use exactly as the standard one

note on version ~1.0

Version ~1.0 had a completely different motivation and is compatible neither with 2.0 nor with the standard node.js inherits.

If you are using version ~1.0 and planning to switch to ~2.0, be careful:




find-root

recursively find the closest package.json

Build Status

usage

Say you want to check if the directory name of a project matches its module name in package.json:

const path = require('path')
const findRoot = require('find-root')

// from a starting directory, recursively search for the nearest
// directory containing package.json
const root = findRoot('/Users/jsdnxx/Code/find-root/tests')
// => '/Users/jsdnxx/Code/find-root'

const dirname = path.basename(root)
console.log('is it the same?')
console.log(dirname === require(path.join(root, 'package.json')).name)

You can also pass in a custom check function (by default, it checks for the existence of package.json in a directory). In this example, we traverse up to find the root of a git repo:

const fs = require('fs')

const gitRoot = findRoot('/Users/jsdnxx/Code/find-root/tests', function (dir) {
  return fs.existsSync(path.resolve(dir, '.git'))
})

api

findRoot: (startingPath : string, check?: (dir: string) => boolean) => string

Returns the path for the nearest directory to startingPath containing a package.json file, eg /foo/module.

If check is provided, returns the path for the closest parent directory where check returns true.

Throws an error if no package.json is found at any level in the startingPath.

installation

> npm install find-root

running the tests

From package root:

> npm install
> npm test

contributors



forwarded

NPM Version NPM Downloads Node.js Version Build Status Test Coverage

Parse HTTP X-Forwarded-For header

Installation

This is a Node.js module available through the npm registry. Installation is done using the npm install command:

$ npm install forwarded

API

var forwarded = require('forwarded')

forwarded(req)

var addresses = forwarded(req)

Parse the X-Forwarded-For header from the request. Returns an array of the addresses, including the socket address for the req, in reverse order (i.e. index 0 is the socket address and the last index is the furthest address, typically the end-user).
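The behavior described above can be sketched as follows (the req shape with connection.remoteAddress is an assumption for illustration; this is not the module's source):

```javascript
// Sketch: socket address first, then the X-Forwarded-For entries from
// nearest proxy back to the original client.
function forwardedSketch (req) {
  const header = req.headers['x-forwarded-for'] || ''
  const proxied = header.split(',')
    .map(function (s) { return s.trim() })
    .filter(Boolean)
  return [req.connection.remoteAddress].concat(proxied.reverse())
}

const req = {
  headers: { 'x-forwarded-for': '10.0.0.1, 10.0.0.2' },
  connection: { remoteAddress: '127.0.0.1' }
}
console.log(forwardedSketch(req)) // [ '127.0.0.1', '10.0.0.2', '10.0.0.1' ]
```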

Testing

$ npm test


crypto-browserify

A port of node’s crypto module to the browser.

Build Status js-standard-style Sauce Test Status

The goal of this module is to reimplement node’s crypto module, in pure javascript so that it can run in the browser.

Here is the subset that is currently implemented:

todo

These features from node’s crypto are still unimplemented.

contributions

If you are interested in writing a feature, please implement as a new module, which will be incorporated into crypto-browserify as a dependency.

All deps must be compatible with node’s crypto (generate example inputs and outputs with node, and save base64 strings inside JSON, so that tests can run in the browser). See sha.js for an example.

Crypto is extra serious so please do not hesitate to review the code, and post comments if you do.



HAR Validator

Extremely fast HTTP Archive (HAR) validator using JSON Schema.

Install

npm install har-validator

CLI Usage

Please refer to har-cli for more info.

API

Note: as of v2.0.0 this module defaults to a Promise-based API. For backward compatibility with v1.x, an async/callback API is also provided.



ieee754 travis npm downloads javascript style guide

saucelabs

Read/write IEEE754 floating point numbers from/to a Buffer or array-like object.

install

npm install ieee754

methods

var ieee754 = require('ieee754')

The ieee754 object has the following functions:

ieee754.read = function (buffer, offset, isLE, mLen, nBytes)
ieee754.write = function (buffer, value, offset, isLE, mLen, nBytes)

The arguments mean the following:
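For example, a little-endian IEEE 754 double has a 52-bit mantissa (mLen = 52) stored in 8 bytes (nBytes = 8). Node's Buffer exposes the same conversion through readDoubleLE/writeDoubleLE (the browserify buffer module implements those with ieee754), so the calls can be illustrated without the package itself:

```javascript
// A float64 corresponds to mLen = 52 and nBytes = 8, little-endian.
var buf = Buffer.alloc(8)
buf.writeDoubleLE(Math.PI, 0)    // like ieee754.write(buf, Math.PI, 0, true, 52, 8)
console.log(buf.readDoubleLE(0)) // like ieee754.read(buf, 0, true, 52, 8) => 3.141592653589793
```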

what is ieee754?

The IEEE Standard for Floating-Point Arithmetic (IEEE 754) is a technical standard for floating-point computation. Read more.



mute-stream

Bytes go in, but they don’t come out (when muted).

This is a basic pass-through stream, but when muted, the bytes are silently dropped, rather than being passed through.

Usage

var MuteStream = require('mute-stream')

var ms = new MuteStream(options)

ms.pipe(process.stdout)
ms.write('foo') // writes 'foo' to stdout
ms.mute()
ms.write('bar') // does not write 'bar'
ms.unmute()
ms.write('baz') // writes 'baz' to stdout

// can also be used to mute incoming data
var ms = new MuteStream
input.pipe(ms)

ms.on('data', function (c) {
  console.log('data: ' + c)
})

input.emit('data', 'foo') // logs 'foo'
ms.mute()
input.emit('data', 'bar') // does not log 'bar'
ms.unmute()
input.emit('data', 'baz') // logs 'baz'

Options

All options are optional.

ms.mute()

Set muted to true. Turns .write() into a no-op.

ms.unmute()

Set muted to false.

ms.isTTY

True if the pipe destination is a TTY, or if the incoming pipe source is a TTY.

Other stream methods…

The other standard readable and writable stream methods are all available. The MuteStream object acts as a facade to its pipe source and destination.



util-deprecate

The Node.js util.deprecate() function with browser support

In Node.js, this module simply re-exports the util.deprecate() function.

In the web browser (i.e. via browserify), a browser-specific implementation of the util.deprecate() function is used.

API

A deprecate() function is the only thing exposed by this module.

// setup:
exports.foo = deprecate(foo, 'foo() is deprecated, use bar() instead');


// users see:
foo();
// foo() is deprecated, use bar() instead
foo();
foo();


constants-browserify

Node’s constants module for the browser.

downloads

Usage

To use with browserify cli:

$ browserify -r constants:constants-browserify script.js

To use with browserify api:

browserify()
  .require('constants-browserify', { expose: 'constants' })
  .add(__dirname + '/script.js')
  .bundle()
  // ...

Installation

With npm do

$ npm install constants-browserify

Port of the OpenBSD bcrypt_pbkdf function to pure JavaScript. npm-ified version of Devi Mandiri’s port, with some minor performance improvements. The code is copied verbatim (and un-styled) from Devi’s work.

API

bcrypt_pbkdf.pbkdf(pass, passlen, salt, saltlen, key, keylen, rounds)

Derive a cryptographic key of arbitrary length from a given password and salt, using the OpenBSD bcrypt_pbkdf function. This is a combination of Blowfish and SHA-512.

See this article for further information.

Parameters:

bcrypt_pbkdf.hash(sha2pass, sha2salt, out)

Calculate a Blowfish hash, given SHA2-512 output of a password and salt. Used as part of the inner round function in the PBKDF.

Parameters:



Methods

NPM Version NPM Downloads Node.js Version Build Status Test Coverage

HTTP verbs that Node.js core’s HTTP parser supports.

This module provides an export that is just like http.METHODS from Node.js core, with the following differences:

Install

$ npm install methods

API

var methods = require('methods')

methods

This is an array of lower-cased method names that Node.js supports. If Node.js provides the http.METHODS export, then this is the same array lower-cased, otherwise it is a snapshot of the verbs from Node.js 0.10.



end-of-stream

A node module that calls a callback when a readable/writable/duplex stream has completed or failed.

npm install end-of-stream

Build status

Usage

Simply pass a stream and a callback to eos. Legacy streams, streams2, and streams3 are all supported.

var eos = require('end-of-stream');

eos(readableStream, function(err) {
    // this will be set to the stream instance
    if (err) return console.log('stream had an error or closed early');
    console.log('stream has ended', this === readableStream);
});

eos(writableStream, function(err) {
    if (err) return console.log('stream had an error or closed early');
    console.log('stream has finished', this === writableStream);
});

eos(duplexStream, function(err) {
    if (err) return console.log('stream had an error or closed early');
    console.log('stream has ended and finished', this === duplexStream);
});

eos(duplexStream, {readable:false}, function(err) {
    if (err) return console.log('stream had an error or closed early');
    console.log('stream has finished but might still be readable');
});

eos(duplexStream, {writable:false}, function(err) {
    if (err) return console.log('stream had an error or closed early');
    console.log('stream has ended but might still be writable');
});

eos(readableStream, {error:false}, function(err) {
    // do not treat emit('error', err) as an end-of-stream
});

end-of-stream is part of the mississippi stream utility collection which includes more useful stream modules similar to this one.



hash-base

NPM Package Build Status Dependency status

js-standard-style

Abstract base class to inherit from if you want to create streams implementing the same API as node crypto Hash (for Cipher / Decipher check crypto-browserify/cipher-base).

Example

const HashBase = require('hash-base')
const inherits = require('inherits')

// our hash function is XOR sum of all bytes
function MyHash () {
  HashBase.call(this, 1) // in bytes

  this._sum = 0x00
}

inherits(MyHash, HashBase)

MyHash.prototype._update = function () {
  for (let i = 0; i < this._block.length; ++i) this._sum ^= this._block[i]
}

MyHash.prototype._digest = function () {
  return this._sum
}

const data = Buffer.from([ 0x00, 0x42, 0x01 ])
const hash = new MyHash().update(data).digest()
console.log(hash) // => 67

You can also check the source code or crypto-browserify/md5.js



toidentifier

NPM Version NPM Downloads Build Status Test Coverage

Convert a string of words to a JavaScript identifier

Install

This is a Node.js module available through the npm registry. Installation is done using the npm install command:

$ npm install toidentifier

Example

var toIdentifier = require('toidentifier')

console.log(toIdentifier('Bad Request'))
// => "BadRequest"

API

This CommonJS module exports a single default function: toIdentifier.

toIdentifier(string)

Given a string as the argument, it will be transformed according to the following rules and the new string will be returned:

  1. Split into words separated by space characters (0x20).
  2. Upper case the first character of each word.
  3. Join the words together with no separator.
  4. Remove all non-word ([0-9a-z_]) characters.
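The four rules translate almost directly into code. This is a hypothetical re-implementation for illustration, not necessarily the module's source:

```javascript
function toIdentifier (str) {
  return str
    .split(' ')                       // 1. split into words on 0x20
    .map(function (token) {           // 2. upper-case each word's first character
      return token.slice(0, 1).toUpperCase() + token.slice(1)
    })
    .join('')                         // 3. join with no separator
    .replace(/[^ _0-9a-z]/gi, '')     // 4. remove non-word characters
}

console.log(toIdentifier('Bad Request')) // 'BadRequest'
```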



pump

pump is a small node module that pipes streams together and destroys all of them if one of them closes.

npm install pump

build status

What problem does it solve?

When using standard source.pipe(dest), source will not be destroyed if dest emits close or an error. You are also not able to provide a callback to tell when the pipe has finished.

pump does these two things for you.

Usage

Simply pass the streams you want to pipe together to pump and add an optional callback

var pump = require('pump')
var fs = require('fs')

var source = fs.createReadStream('/dev/random')
var dest = fs.createWriteStream('/dev/null')

pump(source, dest, function(err) {
  console.log('pipe finished', err)
})

setTimeout(function() {
  dest.destroy() // when dest is closed pump will destroy source
}, 1000)

You can use pump to pipe more than two streams together as well

var transform = someTransformStream()

pump(source, transform, anotherTransform, dest, function(err) {
  console.log('pipe finished', err)
})

If source, transform, anotherTransform or dest closes all of them will be destroyed.

Similarly to stream.pipe(), pump() returns the last stream passed in, so you can do:

return pump(s1, s2) // returns s2

If you want to return a stream that combines both s1 and s2 to a single stream use pumpify instead.

pump is part of the mississippi stream utility collection which includes more useful stream modules similar to this one.



hmac-drbg

Build Status NPM version

JS-only implementation of HMAC DRBG.

Usage

const DRBG = require('hmac-drbg');
const hash = require('hash.js');

const d = new DRBG({
  hash: hash.sha256,
  entropy: '0123456789abcdef',
  nonce: '0123456789abcdef',
  pers: '0123456789abcdef' /* or `null` */
});

d.generate(32, 'hex');


is-buffer travis npm downloads javascript style guide

Determine if an object is a Buffer (including the browserify Buffer)

saucelabs

Why not use Buffer.isBuffer?

This module lets you check if an object is a Buffer without using Buffer.isBuffer (which includes the whole buffer module in browserify).

It’s future-proof and works in node too!
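One common duck-typing approach (a sketch, not necessarily this module's exact source) is to defer to the value's own constructor rather than the global Buffer:

```javascript
// Works for any Buffer implementation (node core or browserify) without
// requiring the buffer module: ask the value's constructor to vouch for it.
function isBuffer (obj) {
  return obj != null && obj.constructor != null &&
    typeof obj.constructor.isBuffer === 'function' &&
    obj.constructor.isBuffer(obj)
}

console.log(isBuffer(Buffer.alloc(4))) // true
console.log(isBuffer({}))              // false
```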

install

npm install is-buffer

usage

var isBuffer = require('is-buffer')

isBuffer(new Buffer(4)) // true

isBuffer(undefined) // false
isBuffer(null) // false
isBuffer('') // false
isBuffer(true) // false
isBuffer(false) // false
isBuffer(0) // false
isBuffer(1) // false
isBuffer(1.0) // false
isBuffer('string') // false
isBuffer({}) // false
isBuffer(function foo () {}) // false


is-date-object Version Badge

npm badge

browser support

Is this value a JS Date object? This module works cross-realm/iframe, and despite ES6 @@toStringTag.

Example

var isDate = require('is-date-object');
var assert = require('assert');

assert.notOk(isDate(undefined));
assert.notOk(isDate(null));
assert.notOk(isDate(false));
assert.notOk(isDate(true));
assert.notOk(isDate(42));
assert.notOk(isDate('foo'));
assert.notOk(isDate(function () {}));
assert.notOk(isDate([]));
assert.notOk(isDate({}));
assert.notOk(isDate(/a/g));
assert.notOk(isDate(new RegExp('a', 'g')));

assert.ok(isDate(new Date()));

Tests

Simply clone the repo, npm install, and run npm test



is-negative-zero Version Badge

npm badge

Is this value negative zero? === will lie to you.

Example

var isNegativeZero = require('is-negative-zero');
var assert = require('assert');

assert.notOk(isNegativeZero(undefined));
assert.notOk(isNegativeZero(null));
assert.notOk(isNegativeZero(false));
assert.notOk(isNegativeZero(true));
assert.notOk(isNegativeZero(0));
assert.notOk(isNegativeZero(42));
assert.notOk(isNegativeZero(Infinity));
assert.notOk(isNegativeZero(-Infinity));
assert.notOk(isNegativeZero(NaN));
assert.notOk(isNegativeZero('foo'));
assert.notOk(isNegativeZero(function () {}));
assert.notOk(isNegativeZero([]));
assert.notOk(isNegativeZero({}));

assert.ok(isNegativeZero(-0));
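The detection itself relies on a well-known trick: -0 === 0 is true, but division distinguishes the two, since 1 / -0 is -Infinity. A minimal sketch of the check (not necessarily the module's exact source):

```javascript
// === cannot tell -0 from 0, but the sign survives division.
function isNegativeZero (value) {
  return value === 0 && 1 / value === -Infinity
}

console.log(isNegativeZero(-0)) // true
console.log(isNegativeZero(0))  // false
```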

Tests

Simply clone the repo, npm install, and run npm test



EVP_BytesToKey

NPM Package Build Status Dependency status

js-standard-style

The insecure key derivation algorithm from OpenSSL.

WARNING: DO NOT USE, except for compatibility reasons.

MD5 is insecure.

Use at least scrypt or pbkdf2-hmac-sha256 instead.

API

EVP_BytesToKey(password, salt, keyLen, ivLen)

Returns: { key: Buffer, iv: Buffer }

Examples

MD5 with aes-256-cbc:

const crypto = require('crypto')
const EVP_BytesToKey = require('evp_bytestokey')

const result = EVP_BytesToKey(
  'my-secret-password',
  null,
  32,
  16
)
// =>
// { key: <Buffer e3 4f 96 f3 86 24 82 7c c2 5d ff 23 18 6f 77 72 54 45 7f 49 d4 be 4b dd 4f 6e 1b cc 92 a4 27 33>,
//   iv: <Buffer 85 71 9a bf ae f4 1e 74 dd 46 b6 13 79 56 f5 5b> }

const cipher = crypto.createCipheriv('aes-256-cbc', result.key, result.iv)


once

Only call a function once.

usage

var once = require('once')

function load (file, cb) {
  cb = once(cb)
  loader.load('file')
  loader.once('load', cb)
  loader.once('error', cb)
}

Or add to the Function.prototype in a responsible way:

// only has to be done once
require('once').proto()

function load (file, cb) {
  cb = cb.once()
  loader.load('file')
  loader.once('load', cb)
  loader.once('error', cb)
}

Ironically, the prototype feature makes this module twice as complicated as necessary.

To check whether your function has been called, use fn.called. Once the function is called for the first time, the return value of the original function is saved in fn.value, and subsequent calls will continue to return this value.

var once = require('once')

function load (cb) {
  cb = once(cb)
  var stream = createStream()
  stream.once('data', cb)
  stream.once('end', function () {
    if (!cb.called) cb(new Error('not found'))
  })
}

once.strict(func)

Throw an error if the function is called twice.

Some functions are expected to be called only once. Using once for them would potentially hide logical errors.

In the example below, the greet function has to call the callback only once:

function greet (name, cb) {
  // return is missing from the if statement
  // when no name is passed, the callback is called twice
  if (!name) cb('Hello anonymous')
  cb('Hello ' + name)
}

function log (msg) {
  console.log(msg)
}

// this will print 'Hello anonymous' but the logical error will be missed
greet(null, once(log))

// once.strict will print 'Hello anonymous' and throw an error when the callback is called the second time
greet(null, once.strict(log))


sha.js

NPM Package Build Status Dependency status

js-standard-style

Node-style SHA in pure JavaScript.

var shajs = require('sha.js')

console.log(shajs('sha256').update('42').digest('hex'))
// => 73475cb40a568e8da8a045ced110137e159f890ac4da883b6b17dc651b3a8049
console.log(new shajs.sha256().update('42').digest('hex'))
// => 73475cb40a568e8da8a045ced110137e159f890ac4da883b6b17dc651b3a8049

var sha256stream = shajs('sha256')
sha256stream.end('42')
console.log(sha256stream.read().toString('hex'))
// => 73475cb40a568e8da8a045ced110137e159f890ac4da883b6b17dc651b3a8049

supported hashes

sha.js currently implements:

Not an actual stream

Note: this doesn’t actually implement a stream, but wrapping it in one is trivial. It does update incrementally, so you can hash things larger than RAM, as it uses a constant amount of memory (except when using base64 or utf8 encoding, see code comments).

Acknowledgements

This work is derived from Paul Johnston’s A JavaScript implementation of the Secure Hash Algorithm.



compressible

NPM Version NPM Downloads Node.js Version Build Status Test Coverage

Compressible Content-Type / mime checking.

Installation

$ npm install compressible

API

var compressible = require('compressible')

compressible(type)

Checks if the given Content-Type is compressible. The type argument is expected to be a valid MIME type or Content-Type string, though no validation is performed.

The MIME type is looked up in the mime-db and, if there is compressible information in the database entry, that is returned. Otherwise, this module will fall back to true for the following types:

If this module is not sure if a type is specifically compressible or specifically uncompressible, undefined is returned.

compressible('text/html') // => true
compressible('image/png') // => false


string_decoder

Node-core v8.9.4 string_decoder for userland

NPM NPM

npm install --save string_decoder

Node-core string_decoder for userland

This package is a mirror of the string_decoder implementation in Node-core.

Full documentation may be found on the Node.js website.

As of version 1.0.0 string_decoder uses semantic versioning.

Previous versions

Previous version numbers match the versions found in Node core, e.g. 0.10.24 matches Node 0.10.24, likewise 0.11.10 matches Node 0.11.10.

Update

The build/ directory contains a build script that will scrape the source from the nodejs/node repo given a specific Node version.

Streams Working Group

string_decoder is maintained by the Streams Working Group, which oversees the development and maintenance of the Streams API within Node.js. The responsibilities of the Streams Working Group include:

See readable-stream for more details.






has-symbols Version Badge

npm badge

Example

var hasSymbols = require('has-symbols');

hasSymbols() === true; // if the environment has native Symbol support. Not polyfillable, not forgeable.

var hasSymbolsKinda = require('has-symbols/shams');
hasSymbolsKinda() === true; // if the environment has a Symbol sham that mostly follows the spec.

Tests

Simply clone the repo, npm install, and run npm test



Dojo Themes

Package that contains a collection of Dojo themes.

Please Note: If you are looking for Dojo 1 themes, these have been relocated to @dojo/dijit-themes. The github url registered with bower has also been updated to point to the new repository; if you encounter any issues, please run bower cache clean and try again.

Usage

With Dojo applications

  1. Install @dojo/themes with npm i @dojo/themes.
  2. Import the theme CSS into your project’s main.css: @import '~@dojo/themes/dojo/index.css';
  3. Import the theme module and pass it to the widgets you need themed:
import theme from '@dojo/themes/dojo';

render() {
    return w(Button, { theme }, [ 'Hello World' ]);
}

With custom elements

  1. Install @dojo/themes with npm i @dojo/themes.
  2. Add the custom element-specific theme CSS to index.html: <link rel="stylesheet" href="node_modules/@dojo/themes/dojo/dojo-{version}.css">.
  3. Add the custom element-specific theme JS to index.html: <script src="node_modules/@dojo/themes/dojo/dojo-{version}.js"></script>.

Composition

To compose and extend the themes within a dojo project, run npm i @dojo/themes and use the css-module composes functionality. Variables can be used by importing the variables.css file from a theme with @import. This functionality is added by a post-css plugin within the dojo build command.

/* myButton.m.css */
@import '@dojo/themes/dojo/variables.css';

.root {
    composes: root from '@dojo/themes/dojo/button.m.css';
    background-color: var(--dojo-green);
}

Generating typings

The following npm scripts are available to facilitate development:



eslint-plugin-eslint-comments

npm version Downloads/month Build Status codecov Dependency Status

Additional ESLint rules for ESLint directive comments (e.g. //eslint-disable-line).

📖 Usage

🚥 Semantic Versioning Policy

eslint-plugin-eslint-comments follows semantic versioning and ESLint’s Semantic Versioning Policy.

📰 Changelog

🍻 Contributing

Contributions are welcome!

Please use GitHub’s Issues/PRs.

Development Tools



is-string Version Badge

npm badge

browser support

Is this value a JS String object or primitive? This module works cross-realm/iframe, and despite ES6 @@toStringTag.

Example

var isString = require('is-string');
var assert = require('assert');

assert.notOk(isString(undefined));
assert.notOk(isString(null));
assert.notOk(isString(false));
assert.notOk(isString(true));
assert.notOk(isString(function () {}));
assert.notOk(isString([]));
assert.notOk(isString({}));
assert.notOk(isString(/a/g));
assert.notOk(isString(new RegExp('a', 'g')));
assert.notOk(isString(new Date()));
assert.notOk(isString(42));
assert.notOk(isString(NaN));
assert.notOk(isString(Infinity));
assert.notOk(isString(new Number(42)));

assert.ok(isString('foo'));
assert.ok(isString(Object('foo')));

Tests

Simply clone the repo, npm install, and run npm test



reserved-words

Build Status

What is it?

Tiny package for detecting reserved words.

Reserved Word is either a Keyword, or a Future Reserved Word, or a Null Literal, or a Boolean Literal. See: ES5 #7.6.1 and ES6 #11.6.2.

Installation

npm install reserved-words

API

check(word, [dialect], strict)

Returns true if the provided identifier string is a Reserved Word in some ECMAScript dialect (ECMA-262 edition).

If the strict flag is truthy, this function additionally checks whether word is a Keyword or Future Reserved Word under strict mode.

Example

var reserved = require('reserved-words');
reserved.check('volatile', 'es3'); // true
reserved.check('volatile', 'es2015'); // false
reserved.check('yield', 3); // false
reserved.check('yield', 6); // true

dialects

es3 (or 3)

Represents ECMA-262 3rd edition.

See section 7.5.1.

es5 (or 5)

Represents ECMA-262 5th edition (ECMAScript 5.1).

Reserved Words are formally defined in ECMA262 sections 7.6.1.1 and 7.6.1.2.

es2015 (or es6, 6)

Represents ECMA-262 6th edition.

Reserved Words are formally defined in sections 11.6.2.1 and 11.6.2.2.



is-get-set-prop

NPM version Build Status Coverage Status

Code Climate Dependencies DevDependencies

Does a JS type have a getter/setter property

Install

npm install --save is-get-set-prop

Usage

ES2015

import isGetSetProp from 'is-get-set-prop';

isGetSetProp('array', 'length');
// => true

isGetSetProp('ARRAY', 'push');
// => false

// is-get-set-prop can only verify native JS types
isGetSetProp('gulp', 'task');
// => false;

ES5

var isGetSetProp = require('is-get-set-prop');

isGetSetProp('array', 'length');
// => true

isGetSetProp('ARRAY', 'push');
// => false

// is-get-set-prop can only verify native JS types
isGetSetProp('customObject', 'customGetterOrSetter');
// => false;

API

isGetSetProp(type, propertyName)

type

Type: string

A native JS type to examine. Note: is-get-set-prop can only verify native JS types.

propertyName

Type: string

The property name to check as a getter/setter of type.



ecdsa-sig-formatter

Build Status Coverage Status

Translate between JOSE and ASN.1/DER encodings for ECDSA signatures

Install

npm install ecdsa-sig-formatter --save

Usage

var format = require('ecdsa-sig-formatter');

var derSignature = '..'; // asn.1/DER encoded ecdsa signature

var joseSignature = format.derToJose(derSignature);

API


.derToJose(Buffer|String signature, String alg) -> String

Convert the ASN.1/DER encoded signature to a JOSE-style concatenated signature. Returns a base64 url encoded String.


.joseToDer(Buffer|String signature, String alg) -> Buffer

Convert the JOSE-style concatenated signature to an ASN.1/DER encoded signature. Returns a Buffer.

Contributing

  1. Fork the repository. Committing directly against this repository is highly discouraged.

  2. Make your modifications in a branch, updating and writing new unit tests as necessary in the spec directory.

  3. Ensure that all tests pass with npm test

  4. rebase your changes against master. Do not merge.

  5. Submit a pull request to this repository. Wait for tests to run and someone to chime in.

Code Style

This repository is configured with EditorConfig and ESLint rules.



minimalistic-crypto-utils

Build Status NPM version

Very minimal utils that are required in order to write a reasonable JS-only crypto module.

Usage

const utils = require('minimalistic-crypto-utils');

utils.toArray('abcd', 'hex');
utils.encode([ 1, 2, 3, 4 ], 'hex');
utils.toHex([ 1, 2, 3, 4 ]);


isarray

Array#isArray for older browsers.

build status downloads

browser support

Usage

var isArray = require('isarray');

console.log(isArray([])); // => true
console.log(isArray({})); // => false

Installation

With npm do

$ npm install isarray

Then bundle for the browser with browserify.

With component do

$ component install juliangruber/isarray


pumpify

Combine an array of streams into a single duplex stream using pump and duplexify. If one of the streams closes/errors all streams in the pipeline will be destroyed.

npm install pumpify

build status

Usage

Pass the streams you want to pipe together to pumpify, pipeline = pumpify(s1, s2, s3, ...). pipeline is a duplex stream that writes to the first stream and reads from the last one. Streams are piped together using pump, so if one of them closes, all streams will be destroyed.

var pumpify = require('pumpify')
var tar = require('tar-fs')
var zlib = require('zlib')
var fs = require('fs')

var untar = pumpify(zlib.createGunzip(), tar.extract('output-folder'))
// you can also pass an array instead
// var untar = pumpify([zlib.createGunzip(), tar.extract('output-folder')])

fs.createReadStream('some-gzipped-tarball.tgz').pipe(untar)

If you are pumping object streams together use pipeline = pumpify.obj(s1, s2, ...). Call pipeline.destroy() to destroy the pipeline (including the streams passed to pumpify).

Using setPipeline(s1, s2, ...)

Similar to duplexify you can also define the pipeline asynchronously using setPipeline(s1, s2, ...)

var untar = pumpify()

setTimeout(function() {
  // will start draining the input now
  untar.setPipeline(zlib.createGunzip(), tar.extract('output-folder'))
}, 1000)

fs.createReadStream('some-gzipped-tarball.tgz').pipe(untar)

pumpify is part of the mississippi stream utility collection which includes more useful stream modules similar to this one.



console-browserify Build Status

Emulate console for all the browsers

Install

You usually do not have to install console-browserify yourself! If your code runs in Node.js, console is built in. If your code runs in the browser, bundlers like browserify or webpack also include the console-browserify module when you do require('console').

But if none of those apply, with npm do:

npm install console-browserify

Usage

var console = require("console")
// Or when manually using console-browserify directly:
// var console = require("console-browserify")

console.log("hello world!")

API

See the Node.js Console docs. console-browserify does not support creating new Console instances and does not support the Inspector-only methods.

Contributing

PRs are very welcome! The main way to contribute to console-browserify is by porting features, bugfixes and tests from Node.js. Ideally, code contributions to this module are copy-pasted from Node.js and transpiled to ES5, rather than reimplemented from scratch. Matching the Node.js code as closely as possible makes maintenance simpler when new changes land in Node.js. This module intends to provide exactly the same API as Node.js, so features that are not available in the core console module will not be accepted. Feature requests should instead be directed at nodejs/node and will be added to this module once they are implemented in Node.js.

If there is a difference in behaviour between Node.js’s console module and this module, please open an issue!



pascalcase NPM version

Convert a string to pascal-case.

Install

Install with npm

$ npm i pascalcase --save

Usage

var pascalcase = require('pascalcase');

pascalcase('a');
//=> 'A'

pascalcase('foo bar baz');
//=> 'FooBarBaz'

pascalcase('foo_bar-baz');
//=> 'FooBarBaz'

pascalcase('foo.bar.baz');
//=> 'FooBarBaz'

pascalcase('foo/bar/baz');
//=> 'FooBarBaz'

pascalcase('foo[bar)baz');
//=> 'FooBarBaz'

pascalcase('#foo+bar*baz');
//=> 'FooBarBaz'

pascalcase('$foo~bar`baz');
//=> 'FooBarBaz'

pascalcase('_foo_bar-baz-');
//=> 'FooBarBaz'
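All of the outputs above follow one pattern, which can be sketched as below (a rough approximation; the published module differs in details such as camelCase handling): split on any run of non-alphanumeric characters, then capitalize each word.

```javascript
// Rough sketch of the pascal-case transformation (not the module's code)
function pascalcase(str) {
  return String(str)
    .split(/[^a-zA-Z0-9]+/)    // split on runs of separators
    .filter(Boolean)           // drop empty pieces from leading/trailing separators
    .map(function (w) { return w.charAt(0).toUpperCase() + w.slice(1); })
    .join('');
}

pascalcase('foo_bar-baz'); // → 'FooBarBaz'
```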

Running tests

Install dev dependencies:

$ npm i -d && npm test

Contributing

Pull requests and stars are always welcome. For bugs and feature requests, please create an issue

Author

Jon Schlinkert


This file was generated by verb-cli on August 19, 2015.



responselike

A response-like object for mocking a Node.js HTTP response stream

Build Status Coverage Status npm npm

Returns a streamable response object similar to a Node.js HTTP response stream. Useful for formatting cached responses so they can be consumed by code expecting a real response.

Install

npm install --save responselike

Or if you’re just using for testing you’ll want:

npm install --save-dev responselike

Usage

const Response = require('responselike');

const response = new Response(200, { foo: 'bar' }, Buffer.from('Hi!'), 'https://example.com');

response.statusCode;
// 200
response.headers;
// { foo: 'bar' }
response.body;
// <Buffer 48 69 21>
response.url;
// 'https://example.com'

response.pipe(process.stdout);
// Hi!

API

new Response(statusCode, headers, body, url)

Returns a streamable response object similar to a Node.js HTTP response stream.

statusCode

Type: number

HTTP response status code.

headers

Type: object

HTTP headers object. Keys will be automatically lowercased.

body

Type: buffer

A Buffer containing the response body. The Buffer contents will be streamable but is also exposed directly as response.body.

url

Type: string

Request URL string.



Acorn-JSX

Build Status NPM version

This is a plugin for Acorn - a tiny, fast JavaScript parser, written completely in JavaScript.

It was created as an experimental, faster alternative to the React.js JSX parser. It later replaced the official parser and these days is used by many prominent development tools.

Transpiler

Please note that this tool only parses source code to JSX AST, which is useful for various language tools and services. If you want to transpile your code to regular ES5-compliant JavaScript with source map, check out Babel and Buble transpilers which use acorn-jsx under the hood.

Usage

Requiring this module provides you with an Acorn plugin that you can use like this:

var acorn = require("acorn");
var jsx = require("acorn-jsx");
acorn.Parser.extend(jsx()).parse("my(<jsx/>, 'code');");

Note that the official spec doesn’t support a mix of XML namespaces and object-style access in tag names (#27), as in <namespace:Object.Property />, so this was deprecated in acorn-jsx@3.0. If you still want to opt in to supporting such constructions, you can pass the following option:

acorn.Parser.extend(jsx({ allowNamespacedObjects: true }))

Also, since most apps use the plain React transformer, a new option was introduced that allows prohibiting namespaces completely:

acorn.Parser.extend(jsx({ allowNamespaces: false }))

Note that by default allowNamespaces is enabled for spec compliance.



eslint-plugin-es

npm version Downloads/month Build Status Coverage Status Dependency Status

An ESLint plugin that can disallow each ECMAScript syntactic feature individually.

🏁 Goal

Espree, the default parser of ESLint, supports the ecmaVersion option. However, it doesn’t allow enabling each syntactic feature individually.

This plugin lets us disable each syntactic feature individually, so we can enable an arbitrary set of syntactic features by combining ecmaVersion with this plugin.
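As an illustration of that combination, a hypothetical .eslintrc.js might parse ES2018 but ban one ES2015 feature via this plugin (the rule name below is taken from the plugin's rule list; check the documentation for the full set):

```javascript
// Hypothetical ESLint config sketch
module.exports = {
  plugins: ['es'],
  parserOptions: { ecmaVersion: 2018 },
  rules: {
    // disallow just this one ES2015 feature
    'es/no-arrow-functions': 'error'
  }
};
```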

📖 Usage

See documentation

🚥 Semantic Versioning Policy

This plugin follows semantic versioning and ESLint’s semantic versioning policy.

📰 Changelog

See releases.

❤️ Contributing

Contributions are welcome!

Please use GitHub’s Issues/PRs.

Development Tools



extend-shallow NPM version Build Status

Extend an object with the properties of additional objects. node.js/javascript util.

Install

Install with npm

$ npm i extend-shallow --save

Usage

var extend = require('extend-shallow');

extend({a: 'b'}, {c: 'd'})
//=> {a: 'b', c: 'd'}

Pass an empty object to shallow clone:

var obj = {};
extend(obj, {a: 'b'}, {c: 'd'})
//=> {a: 'b', c: 'd'}
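The gist of a shallow extend can be sketched as follows (an approximation, not the module's exact code, which restricts itself to own enumerable keys and guards against non-objects):

```javascript
// Shallow-extend sketch: copy enumerable properties of each source onto target
function extend(target /*, ...sources */) {
  for (var i = 1; i < arguments.length; i++) {
    var source = arguments[i];
    for (var key in source) target[key] = source[key];
  }
  return target;
}

extend({ a: 'b' }, { c: 'd' }); // → { a: 'b', c: 'd' }
```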

Running tests

Install dev dependencies:

$ npm i -d && npm test

Author

Jon Schlinkert


This file was generated by verb-cli on June 29, 2015.



registry-auth-token

npm version Build Status

Get the auth token set for an npm registry from .npmrc. Also allows fetching the configured registry URL for a given npm scope.

Installing

npm install --save registry-auth-token

Usage

Returns an object containing token and type, or undefined if no token can be found. type can be either Bearer or Basic.

var getAuthToken = require('registry-auth-token')
var getRegistryUrl = require('registry-auth-token/registry-url')

// Get auth token and type for default `registry` set in `.npmrc`
console.log(getAuthToken()) // {token: 'someToken', type: 'Bearer'}

// Get auth token for a specific registry URL
console.log(getAuthToken('//registry.foo.bar'))

// Find the registry auth token for a given URL (with deep path):
// If registry is at `//some.host/registry`
// URL passed is `//some.host/registry/deep/path`
// Will find the token for the closest matching path; `//some.host/registry`
console.log(getAuthToken('//some.host/registry/deep/path', {recursive: true}))

// Find the configured registry url for scope `@foobar`.
// Falls back to the global registry if not defined.
console.log(getRegistryUrl('@foobar'))

// Use the npm config that is passed in
console.log(getRegistryUrl('http://registry.foobar.eu/', {
  npmrc: {
    'registry': 'http://registry.foobar.eu/',
    '//registry.foobar.eu/:_authToken': 'qar'
  }
}))

Return value

// If auth info can be found:
{token: 'someToken', type: 'Bearer'}

// Or:
{token: 'someOtherToken', type: 'Basic'}

// Or, if nothing is found:
undefined

Security

Please be careful when using this. Leaking your auth token is dangerous.



es-to-primitive Version Badge

npm badge

ECMAScript “ToPrimitive” algorithm. Provides ES5 and ES2015 versions. When different versions of the spec conflict, the default export will be the latest version of the abstract operation. Alternative versions will also be available under an es5/es2015 exported property if you require a specific version.

Example

var toPrimitive = require('es-to-primitive');
var assert = require('assert');

assert(toPrimitive(function () {}) === String(function () {}));

var date = new Date();
assert(toPrimitive(date) === String(date));

assert(toPrimitive({ valueOf: function () { return 3; } }) === 3);

assert(toPrimitive(['a', 'b', 3]) === String(['a', 'b', 3]));

var sym = Symbol();
assert(toPrimitive(Object(sym)) === sym);

Tests

Simply clone the repo, npm install, and run npm test



universalify

Travis branch Coveralls github branch npm npm

Make a callback- or promise-based function support both promises and callbacks.

Uses the native promise implementation.

Installation

npm install universalify

API

universalify.fromCallback(fn)

Takes a callback-based function to universalify, and returns the universalified function.

Function must take a callback as the last parameter that will be called with the signature (error, result). universalify does not support calling the callback with three or more arguments, and does not ensure that the callback is only called once.

function callbackFn (n, cb) {
  setTimeout(() => cb(null, n), 15)
}

const fn = universalify.fromCallback(callbackFn)

// Works with Promises:
fn('Hello World!')
.then(result => console.log(result)) // -> Hello World!
.catch(error => console.error(error))

// Works with Callbacks:
fn('Hi!', (error, result) => {
  if (error) return console.error(error)
  console.log(result)
  // -> Hi!
})
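A simplified sketch of the idea behind fromCallback (the real module also preserves the wrapped function's name and arity): if the caller passed a trailing callback, call through; otherwise return a Promise that adapts the callback.

```javascript
// Simplified fromCallback sketch (not the module's implementation)
function fromCallback(fn) {
  return function (...args) {
    if (typeof args[args.length - 1] === 'function') {
      return fn.apply(this, args); // callback style: pass straight through
    }
    return new Promise((resolve, reject) => {
      fn.call(this, ...args, (err, res) => (err ? reject(err) : resolve(res)));
    });
  };
}
```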

universalify.fromPromise(fn)

Takes a promise-based function to universalify, and returns the universalified function.

Function must return a valid JS promise. universalify does not ensure that a valid promise is returned.

function promiseFn (n) {
  return new Promise(resolve => {
    setTimeout(() => resolve(n), 15)
  })
}

const fn = universalify.fromPromise(promiseFn)

// Works with Promises:
fn('Hello World!')
.then(result => console.log(result)) // -> Hello World!
.catch(error => console.error(error))

// Works with Callbacks:
fn('Hi!', (error, result) => {
  if (error) return console.error(error)
  console.log(result)
  // -> Hi!
})




jQuery

jQuery is a fast, small, and feature-rich JavaScript library.

For information on how to get started and how to use jQuery, please see jQuery’s documentation. For source files and issues, please visit the jQuery repo.

If upgrading, please see the blog post for 3.5.1. This includes notable differences from the previous version and a more readable changelog.

Including jQuery

Below are some of the most common ways to include jQuery.

Browser

Script tag

<script src="https://code.jquery.com/jquery-3.5.1.min.js"></script>

Babel

Babel is a next generation JavaScript compiler. One of the features is the ability to use ES6/ES2015 modules now, even though browsers do not yet support this feature natively.

import $ from "jquery";

Browserify/Webpack

There are several ways to use Browserify and Webpack. For more information on using these tools, please refer to the corresponding project’s documentation. In the script, including jQuery will usually look like this…

var $ = require( "jquery" );

AMD (Asynchronous Module Definition)

AMD is a module format built for the browser. For more information, we recommend require.js’ documentation.

define( [ "jquery" ], function( $ ) {

} );

Node

To include jQuery in Node, first install with npm.

npm install jquery

For jQuery to work in Node, a window with a document is required. Since no such window exists natively in Node, one can be mocked by tools such as jsdom. This can be useful for testing purposes.

const { JSDOM } = require( "jsdom" );
const { window } = new JSDOM( "" );
const $ = require( "jquery" )( window );


is-proto-prop

NPM version Build Status Coverage Status

Code Climate Dependencies DevDependencies

Does a JS type’s prototype have a property?

Uses Sindre Sorhus’ proto-props

Install

npm install --save is-proto-prop

Usage

ES2015

import isProtoProp from 'is-proto-prop';

isProtoProp('array', 'length');
// => true

isProtoProp('Error', 'ignore');
// => false

// `is-proto-prop` can only verify native JS types
isProtoProp('gulp', 'task');
// => false

ES5

var isProtoProp = require('is-proto-prop');

isProtoProp('array', 'length');
// => true

isProtoProp('Error', 'ignore');
// => false

// `is-proto-prop` can only verify native JS types
isProtoProp('gulp', 'task');
// => false

API

isProtoProp(type, propertyName)

Returns a Boolean indicating whether propertyName is on type’s prototype.

type

type: string

JS type to examine the prototype of. Note: is-proto-prop only looks at native JS types.

propertyName

type: string

Property name to look for on type’s prototype. Note: propertyName is case sensitive.
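The check can be sketched as below (a rough approximation: the real module consults a prebuilt proto-props list rather than the live globals, which is why it only covers native types):

```javascript
// Rough sketch of the prototype-property check (not the module's code)
function isProtoProp(type, prop) {
  var name = type.charAt(0).toUpperCase() + type.slice(1); // 'array' → 'Array'
  var ctor = globalThis[name];
  return typeof ctor === 'function' && prop in ctor.prototype;
}

isProtoProp('array', 'length'); // → true
isProtoProp('error', 'ignore'); // → false
```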



performance-now Build Status Dependency Status

Implements a function similar to performance.now (based on process.hrtime).

Modern browsers have a window.performance object with - among others - a now method which gives time in milliseconds, but with sub-millisecond precision. This module offers the same function based on the Node.js native process.hrtime function.

According to the High Resolution Time specification, the number of milliseconds reported by performance.now should be relative to the value of performance.timing.navigationStart.

In the current version of the module (2.0) the reported time is relative to the time the current Node process has started (inferred from process.uptime()).

Version 1.0 reported a different time. The reported time was relative to the time the module was loaded (i.e. the time it was first required). If you need this functionality, version 1.0 is still available on NPM.

Example usage

var now = require("performance-now")
var start = now()
var end = now()
console.log(start.toFixed(3)) // the number of milliseconds the current node process is running
console.log((end - start).toFixed(3)) // ~ 0.002 on my system

Running the now function two times right after each other yields a time difference of a few microseconds. Given this overhead, I think it’s best to assume that the precision of intervals computed with this method is not higher than 10 microseconds, if you don’t know the exact overhead on your own system.



es-abstract Version Badge

npm badge

browser support

ECMAScript spec abstract operations. When different versions of the spec conflict, the default export will be the latest version of the abstract operation. All abstract operations will also be available under an es5/es2015/es2016/es2017/es2018/es2019 entry point, and exported property, if you require a specific version.

Example

var ES = require('es-abstract');
var assert = require('assert');

assert(ES.isCallable(function () {}));
assert(!ES.isCallable(/a/g));

Tests

Simply clone the repo, npm install, and run npm test

Security

Please email [@ljharb](https://github.com/ljharb) or see https://tidelift.com/security if you have a potential security vulnerability to report.



is-callable Version Badge

npm badge

browser support

Is this JS value callable? Works with Functions and GeneratorFunctions, despite ES6 @@toStringTag.

Example

var isCallable = require('is-callable');
var assert = require('assert');

assert.notOk(isCallable(undefined));
assert.notOk(isCallable(null));
assert.notOk(isCallable(false));
assert.notOk(isCallable(true));
assert.notOk(isCallable([]));
assert.notOk(isCallable({}));
assert.notOk(isCallable(/a/g));
assert.notOk(isCallable(new RegExp('a', 'g')));
assert.notOk(isCallable(new Date()));
assert.notOk(isCallable(42));
assert.notOk(isCallable(NaN));
assert.notOk(isCallable(Infinity));
assert.notOk(isCallable(new Number(42)));
assert.notOk(isCallable('foo'));
assert.notOk(isCallable(Object('foo')));

assert.ok(isCallable(function () {}));
assert.ok(isCallable(function* () {}));
assert.ok(isCallable(x => x * x));

Install

Install with

npm install is-callable

Tests

Simply clone the repo, npm install, and run npm test



clone-response

Clone a Node.js HTTP response stream

Build Status Coverage Status npm npm

Returns a new stream and copies over all properties and methods from the original response giving you a complete duplicate.

This is useful in situations where you need to consume the response stream but also want to pass an unconsumed stream somewhere else to be consumed later.

Install

npm install --save clone-response

Usage

const http = require('http');
const cloneResponse = require('clone-response');

http.get('http://example.com', response => {
  const clonedResponse = cloneResponse(response);
  response.pipe(process.stdout);

  setImmediate(() => {
    // The response stream has already been consumed by the time this executes,
    // however the cloned response stream is still available.
    doSomethingWithResponse(clonedResponse);
  });
});

Please bear in mind that the process of cloning a stream consumes it. However, you can consume a stream multiple times in the same tick, therefore allowing you to create multiple clones. E.g.:

const clone1 = cloneResponse(response);
const clone2 = cloneResponse(response);
// response can still be consumed in this tick but cannot be consumed if passed
// into any async callbacks. clone1 and clone2 can be passed around and be
// consumed in the future.

API

cloneResponse(response)

Returns a clone of the passed in response.

response

Type: stream

A Node.js HTTP response stream to clone.



har-schema

JSON Schema for HTTP Archive (HAR).

Build Status Downloads Code Climate Coverage Status Dependency Status Dependencies

Install

npm install --only=production --save har-schema

Usage

Compatible with any JSON Schema validation tool.




mime

Comprehensive MIME type mapping API based on mime-db module.

Install

Install with npm:

npm install mime

Contributing / Testing

npm run test

Command Line

mime [path_string]

E.g.

> mime scripts/jquery.js
application/javascript

API - Queries

mime.lookup(path)

Get the mime type associated with a file; if no mime type is found, application/octet-stream is returned. Performs a case-insensitive lookup using the extension in path (the substring after the last ‘/’ or ‘.’). E.g.

var mime = require('mime');

mime.lookup('/path/to/file.txt');         // => 'text/plain'
mime.lookup('file.txt');                  // => 'text/plain'
mime.lookup('.TXT');                      // => 'text/plain'
mime.lookup('htm');                       // => 'text/html'
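The extension parsing described above can be sketched on its own (the real lookup then maps the extension through mime-db, which is omitted here):

```javascript
// Sketch of the extension parsing only: substring after the last '/' or '.',
// lowercased for the case-insensitive lookup
function extname(path) {
  return String(path).split(/[/.]/).pop().toLowerCase();
}

extname('/path/to/file.TXT'); // → 'txt'
extname('htm');               // → 'htm'
```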

mime.default_type

Sets the mime type returned when mime.lookup fails to find the extension searched for. (Default is application/octet-stream.)

mime.extension(type)

Get the default extension for type

mime.extension('text/html');                 // => 'html'
mime.extension('application/octet-stream');  // => 'bin'

mime.charsets.lookup()

Map mime-type to charset

mime.charsets.lookup('text/plain');        // => 'UTF-8'

(The logic for charset lookups is pretty rudimentary. Feel free to suggest improvements.)

API - Defining Custom Types

Custom type mappings can be added on a per-project basis via the following APIs.

mime.define()

Add custom mime/extension mappings

mime.define({
    'text/x-some-format': ['x-sf', 'x-sft', 'x-sfml'],
    'application/x-my-type': ['x-mt', 'x-mtt'],
    // etc ...
});

mime.lookup('x-sft');                 // => 'text/x-some-format'

The first entry in the extensions array is returned by mime.extension(). E.g.

mime.extension('text/x-some-format'); // => 'x-sf'

mime.load(filepath)

Load mappings from an Apache “.types” format file

mime.load('./my_project.types');

The .types file format is simple - See the types dir for examples.



v8-compile-cache

Build Status

v8-compile-cache attaches a require hook to use V8’s code cache to speed up instantiation time. The “code cache” is the work of parsing and compiling done by V8.

The ability to tap into V8 to produce/consume this cache was introduced in Node v5.7.0.

Usage

  1. Add the dependency:
$ npm install --save v8-compile-cache
  2. Then, in your entry module add:
require('v8-compile-cache');

Requiring v8-compile-cache in Node <5.7.0 is a noop – but you need at least Node 4.0.0 to support the ES2015 syntax used by v8-compile-cache.

Options

Set the environment variable DISABLE_V8_COMPILE_CACHE=1 to disable the cache.

Cache directory is defined by environment variable V8_COMPILE_CACHE_CACHE_DIR or defaults to <os.tmpdir()>/v8-compile-cache-<V8_VERSION>.

Internals

Cache files are suffixed .BLOB and .MAP corresponding to the entry module that required v8-compile-cache. The cache is entry module specific because it is faster to load the entire code cache into memory at once, than it is to read it from disk on a file-by-file basis.

Benchmarks

See https://github.com/zertosh/v8-compile-cache/tree/master/bench.

Load Times:

Module            Without Cache   With Cache
babel-core        218ms           185ms
yarn              153ms           113ms
yarn (bundled)    228ms           105ms

^ Includes the overhead of loading the cache itself.

Acknowledgements



ESQuery

ESQuery is a library for querying the AST output by Esprima for patterns of syntax using a CSS-style selector system. Check out the demo:

demo

The following selectors are supported:

  * AST node type: ForStatement
  * wildcard: *
  * attribute existence: [attr]
  * attribute value: [attr="foo"] or [attr=123]
  * attribute regex: [attr=/foo.*/] or (with flags) [attr=/foo.*/is]
  * attribute conditions: [attr!="foo"], [attr>2], [attr<3], [attr>=2], or [attr<=3]
  * nested attribute: [attr.level2="foo"]
  * field: FunctionDeclaration > Identifier.id
  * first or last child: :first-child or :last-child
  * nth-child (no ax+b support): :nth-child(2)
  * nth-last-child (no ax+b support): :nth-last-child(1)
  * descendant: ancestor descendant
  * child: parent > child
  * following sibling: node ~ sibling
  * adjacent sibling: node + adjacent
  * negation: :not(ForStatement)
  * has: :has(ForStatement)
  * matches-any: :matches([attr] > :first-child, :last-child)
  * subject indicator: !IfStatement > [name="foo"]
  * class of AST node: :statement, :expression, :declaration, :function, or :pattern

Build Status

fast-text-encoding

This is a fast polyfill for TextEncoder and TextDecoder, which let you encode and decode JavaScript strings into UTF-8 bytes.

It is fast partially because it does not support any encodings aside from UTF-8 (note that natively, only TextDecoder supports alternative encodings anyway). See some benchmarks.



Usage

Install as “fast-text-encoding” via your favourite package manager.

You only need this polyfill if you’re supporting older browsers like IE, legacy Edge, ancient Chrome and Firefox, or Node before v11.

Browser

Include the minified code inside a script tag or as an ES6 Module for its side effects. It will create TextEncoder and TextDecoder if the symbols are missing on window or global.

<script src="node_modules/fast-text-encoding/text.min.js"></script>
<script type="module">
  import './node_modules/fast-text-encoding/text.min.js';
  import 'fast-text-encoding';  // or perhaps this
  // confidently do something with TextEncoder or TextDecoder \o/
</script>

⚠️ You’ll probably want to depend on text.min.js, as it’s compiled to ES5 for older environments.

Node

You only need this polyfill in Node before v11. However, you can use Buffer to provide the same functionality (but not conforming to any spec) in versions even older than that.

require('fast-text-encoding');  // just require me before use

const buffer = new TextEncoder().encode('Turn me into UTF-8!');
// buffer is now a Uint8Array of [84, 117, 114, 110, ...]

In Node v5.1 and above, this polyfill uses Buffer to implement TextDecoder.



Release

Compile code with Closure Compiler.

// ==ClosureCompiler==
// @compilation_level ADVANCED_OPTIMIZATIONS
// @output_file_name text.min.js
// ==/ClosureCompiler==

// code here


Destroy

Destroy a stream.

This module is meant to ensure a stream gets destroyed, handling different APIs and Node.js bugs.

API

var destroy = require('destroy')

destroy(stream)

Destroy the given stream. In most cases, this is identical to a simple stream.destroy() call. The rules are as follows for a given stream:

  1. If the stream is an instance of ReadStream, then call stream.destroy() and add a listener to the open event to call stream.close() if it is fired. This is for a Node.js bug that will leak a file descriptor if .destroy() is called before open.
  2. If the stream is not an instance of Stream, then nothing happens.
  3. If the stream has a .destroy() method, then call it.

The function returns the stream passed in as the argument.

Example

var destroy = require('destroy')

var fs = require('fs')
var stream = fs.createReadStream('package.json')

// ... and later
destroy(stream)


json-parse-better-errors

json-parse-better-errors is a Node.js library for getting nicer errors out of JSON.parse(), including context and position of the parse errors.

Install

npm install --save json-parse-better-errors

Example

const parseJson = require('json-parse-better-errors')

parseJson('"foo"')
parseJson('garbage') // more useful error message

Contributing

The npm team enthusiastically welcomes contributions and project participation! There’s a bunch of things you can do if you want to contribute! The Contributor Guide has all the information you need for everything from reporting bugs to contributing entire new features. Please don’t hesitate to jump in if you’d like to, or even ask us questions if something isn’t clear.

All participants and maintainers in this project are expected to follow the Code of Conduct, and just generally be excellent to each other.

Please refer to the Changelog for project history details, too.

Happy hacking!

API

> parse(txt, ?reviver, ?context=20)

Works just like JSON.parse, but will include a bit more information when an error happens.



object.values Version Badge

npm badge

browser support

An ES2017 spec-compliant Object.values shim. Invoke its “shim” method to shim Object.values if it is unavailable or noncompliant.

This package implements the es-shim API interface. It works in an ES3-supported environment and complies with the spec.

Most common usage:

var assert = require('assert');
var values = require('object.values');

var obj = { a: 1, b: 2, c: 3 };
var expected = [1, 2, 3];

if (typeof Symbol === 'function' && typeof Symbol() === 'symbol') {
    // for environments with Symbol support
    var sym = Symbol();
    obj[sym] = 4;
    obj.d = sym;
    expected.push(sym);
}

assert.deepEqual(values(obj), expected);

if (!Object.values) {
    values.shim();
}

assert.deepEqual(Object.values(obj), expected);

Tests

Simply clone the repo, npm install, and run npm test



http-timer

Timings for HTTP requests

Build Status Coverage Status install size

Inspired by the request package.

Usage

'use strict';
const https = require('https');
const timer = require('@szmarczak/http-timer');

const request = https.get('https://httpbin.org/anything');
const timings = timer(request);

request.on('response', response => {
    response.on('data', () => {}); // Consume the data somehow
    response.on('end', () => {
        console.log(timings);
    });
});

// { start: 1535708511443,
//   socket: 1535708511444,
//   lookup: 1535708511444,
//   connect: 1535708511582,
//   upload: 1535708511887,
//   response: 1535708512037,
//   end: 1535708512040,
//   phases:
//    { wait: 1,
//      dns: 0,
//      tcp: 138,
//      request: 305,
//      firstByte: 150,
//      download: 3,
//      total: 597 } }

API

timer(request)

Returns: Object

Note: The time is a number representing the milliseconds elapsed since the UNIX epoch.
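Each phase in the example output above is simply the difference between two of these absolute timestamps. A quick sketch, using the numbers from the example, shows how they relate:

```javascript
// Recomputing the phases from the absolute timestamps shown in the example above
const timings = {
  start: 1535708511443,
  socket: 1535708511444,
  lookup: 1535708511444,
  connect: 1535708511582,
  upload: 1535708511887,
  response: 1535708512037,
  end: 1535708512040
};

const phases = {
  wait: timings.socket - timings.start,         // 1
  dns: timings.lookup - timings.socket,         // 0
  tcp: timings.connect - timings.lookup,        // 138
  request: timings.upload - timings.connect,    // 305
  firstByte: timings.response - timings.upload, // 150
  download: timings.end - timings.response,     // 3
  total: timings.end - timings.start            // 597
};

console.log(phases);
```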

String.prototype.trimEnd Version Badge

npm badge

browser support

An ES2019-spec-compliant String.prototype.trimEnd shim. Invoke its “shim” method to shim String.prototype.trimEnd if it is unavailable.

This package implements the es-shim API interface. It works in an ES3-supported environment and complies with the spec. In an ES6 environment, it will also work properly with Symbols.

Most common usage:

var trimEnd = require('string.prototype.trimend');

assert(trimEnd(' \t\na \t\n') === 'a \t\n');

if (!String.prototype.trimEnd) {
    trimEnd.shim();
}

assert(trimEnd(' \t\na \t\n ') === ' \t\na \t\n '.trimEnd());

Tests

Simply clone the repo, npm install, and run npm test



range-parser

NPM Version NPM Downloads Node.js Version Build Status Test Coverage

Range header field parser.

Installation

This is a Node.js module available through the npm registry. Installation is done using the npm install command:

$ npm install range-parser

API

var parseRange = require('range-parser')

parseRange(size, header, options)

Parse the given header string, where size is the maximum size of the resource. An array of ranges will be returned, or a negative number indicating an error parsing: -2 signals a malformed header string, and -1 signals an unsatisfiable range.

// parse header from request
var range = parseRange(size, req.headers.range)

// the type of the range
if (range.type === 'bytes') {
  // the ranges
  range.forEach(function (r) {
    // do something with r.start and r.end
  })
}

Options

These properties are accepted in the options object.

combine

Specifies if overlapping & adjacent ranges should be combined, defaults to false. When true, ranges will be combined and returned as if they were specified that way in the header.

parseRange(100, 'bytes=50-55,0-10,5-10,56-60', { combine: true })
// => [
//      { start: 0,  end: 10 },
//      { start: 50, end: 60 }
//    ]
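The combining behavior can be pictured with a small stdlib-only sketch (illustrative only, not range-parser's actual implementation): sort the ranges by start, then merge any range that overlaps or is adjacent to the previous one.

```javascript
// Illustrative sketch of combining overlapping/adjacent ranges (not range-parser's source)
function combineRanges(ranges) {
  const sorted = ranges.slice().sort((a, b) => a.start - b.start);
  const out = [Object.assign({}, sorted[0])];
  for (const r of sorted.slice(1)) {
    const last = out[out.length - 1];
    if (r.start <= last.end + 1) {
      // overlapping or adjacent: extend the previous range
      last.end = Math.max(last.end, r.end);
    } else {
      out.push(Object.assign({}, r));
    }
  }
  return out;
}

// Same ranges as in 'bytes=50-55,0-10,5-10,56-60' above
console.log(combineRanges([
  { start: 50, end: 55 },
  { start: 0, end: 10 },
  { start: 5, end: 10 },
  { start: 56, end: 60 }
]));
// => [ { start: 0, end: 10 }, { start: 50, end: 60 } ]
```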

String.prototype.trimStart Version Badge

npm badge

browser support

An ES2019-spec-compliant String.prototype.trimStart shim. Invoke its “shim” method to shim String.prototype.trimStart if it is unavailable.

This package implements the es-shim API interface. It works in an ES3-supported environment and complies with the spec. In an ES6 environment, it will also work properly with Symbols.

Most common usage:

var trimStart = require('string.prototype.trimstart');

assert(trimStart(' \t\na \t\n') === 'a \t\n');

if (!String.prototype.trimStart) {
    trimStart.shim();
}

assert(trimStart(' \t\na \t\n') === ' \t\na \t\n'.trimStart());

Tests

Simply clone the repo, npm install, and run npm test



contains-path NPM version

Return true if a file path contains the given path.

Install

Install with npm

$ npm i contains-path --save

Usage

var containsPath = require('contains-path');

true

All of the following return true:

containsPath('./a/b/c', 'a');
containsPath('./a/b/c', 'a/b');
containsPath('./b/a/b/c', 'a/b');
containsPath('/a/b/c', '/a/b');
containsPath('/a/b/c', 'a/b');
containsPath('a', 'a');
containsPath('a/b/c', 'a');
//=> true

false

All of the following return false:

containsPath('abc', 'a');
containsPath('abc', 'a.md');
containsPath('./b/a/b/c', './a/b');
containsPath('./b/a/b/c', './a');
containsPath('./b/a/b/c', '/a/b');
containsPath('/b/a/b/c', '/a/b');
//=> false

Running tests

Install dev dependencies:

$ npm i -d && npm test

Contributing

Pull requests and stars are always welcome. For bugs and feature requests, please create an issue

Author

Jon Schlinkert


This file was generated by verb-cli on July 07, 2015.


media-typer

NPM Version NPM Downloads Node.js Version Build Status Test Coverage

Simple RFC 6838 media type parser

Installation

$ npm install media-typer

API

var typer = require('media-typer')

typer.parse(string)

var obj = typer.parse('image/svg+xml; charset=utf-8')

Parse a media type string. This will return an object with the following properties (examples are shown for the string 'image/svg+xml; charset=utf-8'):

typer.parse(req)

var obj = typer.parse(req)

Parse the content-type header from the given req. Short-cut for typer.parse(req.headers['content-type']).

typer.parse(res)

var obj = typer.parse(res)

Parse the content-type header set on the given res. Short-cut for typer.parse(res.getHeader('content-type')).

typer.format(obj)

var obj = typer.format({type: 'image', subtype: 'svg', suffix: 'xml'})

Format an object into a media type string. This will return a string of the mime type for the given object. For the properties of the object, see the documentation for typer.parse(string).
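The string produced follows the RFC 6838 shape type "/" subtype ["+" suffix]. A minimal stdlib-only sketch of that assembly (illustrative, not media-typer's actual code):

```javascript
// Illustrative assembly of a media type string per RFC 6838 (not media-typer's source)
function formatMediaType(obj) {
  let str = obj.type + '/' + obj.subtype;
  if (obj.suffix) {
    str += '+' + obj.suffix;
  }
  return str;
}

console.log(formatMediaType({ type: 'image', subtype: 'svg', suffix: 'xml' }));
// => 'image/svg+xml'
```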



to-object-path NPM version

Create an object path from a list or array of strings.

Install

Install with npm

$ npm i to-object-path --save

Usage

var toPath = require('to-object-path');

toPath('foo', 'bar', 'baz');
toPath('foo', ['bar', 'baz']);
//=> 'foo.bar.baz'

Also supports passing an arguments object (without having to slice args):

function foo() {
  return toPath(arguments);
}

foo('foo', 'bar', 'baz');
foo('foo', ['bar', 'baz']);
//=> 'foo.bar.baz'
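The behavior described above can be sketched in a few lines (illustrative only, not the package's implementation): flatten the strings and arrays and join with dots, with a special case for a passed-in arguments object.

```javascript
// Illustrative sketch of to-object-path's behavior (not the package's source)
function toPath(...args) {
  // Support a function passing its arguments object directly
  if (args.length === 1 && typeof args[0] === 'object' && !Array.isArray(args[0])) {
    args = Array.from(args[0]);
  }
  return args.flat(Infinity).join('.');
}

console.log(toPath('foo', 'bar', 'baz'));   // 'foo.bar.baz'
console.log(toPath('foo', ['bar', 'baz'])); // 'foo.bar.baz'

function demo() {
  return toPath(arguments);
}
console.log(demo('foo', ['bar', 'baz']));   // 'foo.bar.baz'
```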

Visit the example to see how this could be used in an application.

Running tests

Install dev dependencies:

$ npm i -d && npm test

Contributing

Pull requests and stars are always welcome. For bugs and feature requests, please create an issue.

Author

Jon Schlinkert


This file was generated by verb-cli on October 28, 2015.


define-property NPM version

Define a non-enumerable property on an object.

Install

Install with npm

$ npm i define-property --save

Usage

Params

var define = require('define-property');
var obj = {};
define(obj, 'foo', function(val) {
  return val.toUpperCase();
});

console.log(obj);
//=> {}

console.log(obj.foo('bar'));
//=> 'BAR'
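The reason console.log(obj) prints {} is that the property is defined non-enumerable. The same effect can be shown with the built-in Object.defineProperty, which is the mechanism such a helper wraps (a sketch, not the package's code):

```javascript
// Non-enumerable properties are invisible to enumeration but still accessible
var obj = {};
Object.defineProperty(obj, 'foo', {
  value: function (val) { return val.toUpperCase(); },
  enumerable: false,
  configurable: true,
  writable: true
});

console.log(Object.keys(obj)); // []
console.log(obj.foo('bar'));   // 'BAR'
```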

get/set

define(obj, 'foo', {
  get: function() {},
  set: function() {}
});

Running tests

Install dev dependencies:

$ npm i -d && npm test

Contributing

Pull requests and stars are always welcome. For bugs and feature requests, please create an issue.

Author

Jon Schlinkert


This file was generated by verb-cli on August 31, 2015.






isStream

Build Status

Test if an object is a Stream

NPM

The missing Stream.isStream(obj): determine if an object is a standard Node.js Stream. Works for Node-core Stream objects (for 0.8, 0.10, 0.11, and in theory, older and newer versions) and all versions of readable-stream.

Usage:

var isStream = require('isstream')
var Stream = require('stream')

isStream(new Stream()) // true

isStream({}) // false

isStream(new Stream.Readable())    // true
isStream(new Stream.Writable())    // true
isStream(new Stream.Duplex())      // true
isStream(new Stream.Transform())   // true
isStream(new Stream.PassThrough()) // true

But wait! There’s more!

You can also test for isReadable(obj), isWritable(obj) and isDuplex(obj) to test for implementations of Streams2 (and Streams3) base classes.

var isReadable = require('isstream').isReadable
var isWritable = require('isstream').isWritable
var isDuplex = require('isstream').isDuplex
var Stream = require('stream')

isReadable(new Stream()) // false
isWritable(new Stream()) // false
isDuplex(new Stream())   // false

isReadable(new Stream.Readable())    // true
isReadable(new Stream.Writable())    // false
isReadable(new Stream.Duplex())      // true
isReadable(new Stream.Transform())   // true
isReadable(new Stream.PassThrough()) // true

isWritable(new Stream.Readable())    // false
isWritable(new Stream.Writable())    // true
isWritable(new Stream.Duplex())      // true
isWritable(new Stream.Transform())   // true
isWritable(new Stream.PassThrough()) // true

isDuplex(new Stream.Readable())    // false
isDuplex(new Stream.Writable())    // false
isDuplex(new Stream.Duplex())      // true
isDuplex(new Stream.Transform())   // true
isDuplex(new Stream.PassThrough()) // true

Reminder: when implementing your own streams, please use readable-stream rather than core streams.

NPM version npm download Build Status Coverage Status

Features

API

Esprima can be used to perform lexical analysis (tokenization) or syntactic analysis (parsing) of a JavaScript program.

A simple example on Node.js REPL:

> var esprima = require('esprima');
> var program = 'const answer = 42';

> esprima.tokenize(program);
[ { type: 'Keyword', value: 'const' },
  { type: 'Identifier', value: 'answer' },
  { type: 'Punctuator', value: '=' },
  { type: 'Numeric', value: '42' } ]
  
> esprima.parseScript(program);
{ type: 'Program',
  body:
   [ { type: 'VariableDeclaration',
       declarations: [Object],
       kind: 'const' } ],
  sourceType: 'script' }

For more information, please read the complete documentation.


object-keys Version Badge

npm badge

browser support

An Object.keys shim. Invoke its “shim” method to shim Object.keys if it is unavailable.

Most common usage:

var keys = Object.keys || require('object-keys');

Example

var keys = require('object-keys');
var assert = require('assert');
var obj = {
    a: true,
    b: true,
    c: true
};

assert.deepEqual(keys(obj), ['a', 'b', 'c']);

var keys = require('object-keys');
var assert = require('assert');
/* when Object.keys is not present */
delete Object.keys;
var shimmedKeys = keys.shim();
assert.equal(shimmedKeys, keys);
assert.deepEqual(Object.keys(obj), keys(obj));

var keys = require('object-keys');
var assert = require('assert');
/* when Object.keys is present */
var shimmedKeys = keys.shim();
assert.equal(shimmedKeys, Object.keys);
assert.deepEqual(Object.keys(obj), keys(obj));

Source

Implementation taken directly from es5-shim, with modifications, including from lodash.

Tests

Simply clone the repo, npm install, and run npm test



eslint-import-resolver-webpack

npm

Webpack-literate module resolution plugin for eslint-plugin-import.

Published separately to allow pegging to a specific version in case of breaking changes.

To use with eslint-plugin-import, run:

npm i eslint-import-resolver-webpack -g

or if you manage ESLint as a dev dependency:

# inside your project's working tree
npm install eslint-import-resolver-webpack --save-dev

By default, the resolver will look for webpack.config.js as a sibling of the first ancestral package.json; alternatively, a config parameter may be provided with another filename/path, either relative to the package.json or as a complete absolute path.

If multiple webpack configurations are found the first configuration containing a resolve section will be used. Optionally, the config-index (zero-based) setting can be used to select a specific configuration.

---
settings:
  import/resolver: webpack  # take all defaults

or with explicit config file name:

---
settings:
  import/resolver:
    webpack:
      config: 'webpack.dev.config.js'

or with explicit config file index:

---
settings:
  import/resolver:
    webpack:
      config: 'webpack.multiple.config.js'
      config-index: 1   # take the config at index 1

or with explicit config file path relative to your projects’s working directory:

---
settings:
  import/resolver:
    webpack:
      config: './configs/webpack.dev.config.js'

or with explicit config object:

---
settings:
  import/resolver:
    webpack:
      config:
        resolve:
          extensions:
            - .js
            - .jsx

If your config relies on environment variables, they can be specified using the env parameter. If your config is a function, it will be invoked with the value assigned to env:

---
settings:
  import/resolver:
    webpack:
      config: 'webpack.config.js'
      env:
        NODE_ENV: 'local'
        production: true

Get supported eslint-import-resolver-webpack with the Tidelift Subscription



ASN1.js

ASN.1 DER Encoder/Decoder and DSL.

Example

Define model:

var asn = require('asn1.js');

var Human = asn.define('Human', function() {
  this.seq().obj(
    this.key('firstName').octstr(),
    this.key('lastName').octstr(),
    this.key('age').int(),
    this.key('gender').enum({ 0: 'male', 1: 'female' }),
    this.key('bio').seqof(Bio)
  );
});

var Bio = asn.define('Bio', function() {
  this.seq().obj(
    this.key('time').gentime(),
    this.key('description').octstr()
  );
});

Encode data:

var output = Human.encode({
  firstName: 'Thomas',
  lastName: 'Anderson',
  age: 28,
  gender: 'male',
  bio: [
    {
      time: +new Date('31 March 1999'),
      description: 'freedom of mind'
    }
  ]
}, 'der');

Decode data:

var human = Human.decode(output, 'der');
console.log(human);
/*
{ firstName: <Buffer 54 68 6f 6d 61 73>,
  lastName: <Buffer 41 6e 64 65 72 73 6f 6e>,
  age: 28,
  gender: 'male',
  bio:
   [ { time: 922820400000,
       description: <Buffer 66 72 65 65 64 6f 6d 20 6f 66 20 6d 69 6e 64> } ] }
*/

Partial decode

It's possible to parse data without stopping on the first error. In order to do so, you should call:

var human = Human.decode(output, 'der', { partial: true });
console.log(human);
/*
{ result: { ... },
  errors: [ ... ] }
*/


Utility functions for working with typescript’s AST

Greenkeeper badge

Usage

This package consists of two major parts: utilities and typeguard functions. By importing the project you will get both of them.

import * as utils from "tsutils";
utils.isIdentifier(node); // typeguard
utils.getLineRanges(sourceFile); // utilities

If you don’t need everything offered by this package, you can select what should be imported. The parts that are not imported are never read from disk, which may save some startup time and reduce memory consumption.

If you only need typeguards you can explicitly import them:

import { isIdentifier } from "tsutils/typeguard";
// You can even distinguish between typeguards for nodes and types
import { isUnionTypeNode } from "tsutils/typeguard/node";
import { isUnionType } from "tsutils/typeguard/type";

If you only need the utilities you can also explicitly import them:

import { forEachComment, forEachToken } from "tsutils/util";

Typescript version dependency

This package is backwards compatible with typescript 2.8.0 at runtime although compiling might need a newer version of typescript installed.

Using typescript@next might work, but it’s not officially supported. If you encounter any bugs, please open an issue.

For compatibility with older versions of TypeScript typeguard functions are separated by TypeScript version. If you are stuck on typescript@2.8, you should import directly from the submodule for that version:

// all typeguards compatible with typescript@2.8
import { isIdentifier } from "tsutils/typeguard/2.8";
// you can even use nested submodules
import { isIdentifier } from "tsutils/typeguard/2.8/node";

// all typeguards compatible with typescript@2.9 (includes those of 2.8)
import { isIdentifier } from "tsutils/typeguard/2.9";

// always points to the latest stable version (2.9 as of writing this)
import { isIdentifier } from "tsutils/typeguard";
import { isIdentifier } from "tsutils";

// always points to the typeguards for the next TypeScript version (3.0 as of writing this)
import { isIdentifier } from "tsutils/typeguard/next";

Note that if you are also using utility functions, you should prefer the relevant submodule:

// importing directly from 'tsutils' would pull in the latest typeguards
import { forEachToken } from 'tsutils/util';
import { isIdentifier } from 'tsutils/typeguard/2.8';


util Build Status

Node.js’s util module for all engines.

This implements the Node.js util module for environments that do not have it, like browsers.

Install

You usually do not have to install util yourself. If your code runs in Node.js, util is built in. If your code runs in the browser, bundlers like browserify or webpack also include the util module.

But if none of those apply, with npm do:

npm install util

Usage

var util = require('util')
var EventEmitter = require('events')

function MyClass() { EventEmitter.call(this) }
util.inherits(MyClass, EventEmitter)

The util module uses ES5 features. If you need to support very old browsers like IE8, use a shim like es5-shim. You need both the shim and the sham versions of es5-shim.

To use util.promisify and util.callbackify, Promises must already be available. If you need to support browsers like IE11 that do not support Promises, use a shim. es6-promise is a popular one but there are many others available on npm.

API

See the Node.js util docs. util currently supports the Node 8 LTS API. However, some of the methods are outdated. The inspect and format methods included in this module are much simpler and more barebones than the ones in Node.js.

Contributing

PRs are very welcome! The main way to contribute to util is by porting features, bugfixes and tests from Node.js. Ideally, code contributions to this module are copy-pasted from Node.js and transpiled to ES5, rather than reimplemented from scratch. Matching the Node.js code as closely as possible makes maintenance simpler when new changes land in Node.js. This module intends to provide exactly the same API as Node.js, so features that are not available in the core util module will not be accepted. Feature requests should instead be directed at nodejs/node and will be added to this module once they are implemented in Node.js.

If there is a difference in behaviour between Node.js’s util module and this module, please open an issue!



http-proxy-agent

An HTTP(s) proxy http.Agent implementation for HTTP

Build Status

This module provides an http.Agent implementation that connects to a specified HTTP or HTTPS proxy server, and can be used with the built-in http module.

Note: For HTTP proxy usage with the https module, check out node-https-proxy-agent.

Installation

Install with npm:

$ npm install http-proxy-agent

Example

var url = require('url');
var http = require('http');
var HttpProxyAgent = require('http-proxy-agent');

// HTTP/HTTPS proxy to connect to
var proxy = process.env.http_proxy || 'http://168.63.76.32:3128';
console.log('using proxy server %j', proxy);

// HTTP endpoint for the proxy to connect to
var endpoint = process.argv[2] || 'http://nodejs.org/api/';
console.log('attempting to GET %j', endpoint);
var opts = url.parse(endpoint);

// create an instance of the `HttpProxyAgent` class with the proxy server information
var agent = new HttpProxyAgent(proxy);
opts.agent = agent;

http.get(opts, function (res) {
  console.log('"response" event!', res.headers);
  res.pipe(process.stdout);
});


is-extendable NPM version

Returns true if a value is any of the object types: array, regexp, plain object, function or date. This is useful for determining if a value can be extended, e.g. “can the value have keys?”

Install

Install with npm

$ npm i is-extendable --save

Usage

var isExtendable = require('is-extendable');

Returns true if the value is any of the following:

Notes

All objects in JavaScript can have keys, but it’s a pain to check for this, since we either need to verify that the value is not null or undefined and:

Also note that an extendable object is not the same as an extensible object, which is one that (in es6) is not sealed, frozen, or marked as non-extensible using preventExtensions.
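A minimal sketch of the kind of check described above (illustrative only, not the package's actual source): accept functions, arrays, regexps, dates, and plain objects.

```javascript
// Illustrative sketch of an "is this value extendable?" check (not is-extendable's source)
function isPlainObject(val) {
  return Object.prototype.toString.call(val) === '[object Object]';
}

function isExtendable(val) {
  return typeof val === 'function'
    || Array.isArray(val)
    || val instanceof RegExp
    || val instanceof Date
    || isPlainObject(val);
}

console.log(isExtendable({}));              // true
console.log(isExtendable([]));              // true
console.log(isExtendable(/foo/));           // true
console.log(isExtendable(new Date()));      // true
console.log(isExtendable(function () {}));  // true
console.log(isExtendable('str'));           // false
console.log(isExtendable(null));            // false
```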

Running tests

Install dev dependencies:

$ npm i -d && npm test

Contributing

Pull requests and stars are always welcome. For bugs and feature requests, please create an issue

Author

Jon Schlinkert


This file was generated by verb-cli on July 04, 2015.


gcp-metadata

Get the metadata from a Google Cloud Platform environment.

NPM Version codecov

$ npm install --save gcp-metadata

const gcpMetadata = require('gcp-metadata');

Check to see if the metadata server is available

const isAvailable = await gcpMetadata.isAvailable();

Access all metadata

const data = await gcpMetadata.instance();
console.log(data); // ... All metadata properties

Access specific properties

const data = await gcpMetadata.instance('hostname');
console.log(data); // ...Instance hostname
const projectId = await gcpMetadata.project('project-id');
console.log(projectId); // ...Project ID of the running instance

Access nested properties with the relative path

const data = await gcpMetadata.instance('service-accounts/default/email');
console.log(data); // ...Email address of the Compute identity service account

Access specific properties with query parameters

const data = await gcpMetadata.instance({
  property: 'tags',
  params: { alt: 'text' }
});
console.log(data) // ...Tags as newline-delimited list

Access with custom headers

await gcpMetadata.instance({
  headers: { 'no-trace': '1' }
}); // ...Request is untraced

Take care with large number valued properties

In some cases number valued properties returned by the Metadata Service may be too large to be representable as JavaScript numbers. In such cases we return those values as BigNumber objects (from the bignumber.js library). Numbers that fit within the JavaScript number range will be returned as normal number values.

const id = await gcpMetadata.instance('id');
console.log(id)  // ... BigNumber { s: 1, e: 18, c: [ 45200, 31799277581759 ] }
console.log(id.toString()) // ... 4520031799277581759

Environment variables

For example:

export GCE_METADATA_HOST='169.254.169.254'

unc-path-regex

Regular expression for testing if a file path is a Windows UNC file path. Can also be used as a component of another regexp via the .source property.

Visit the MSDN reference for Common Data Types 2.2.57 UNC for more information about UNC paths.

Install

Install with npm

$ npm i unc-path-regex --save

Usage

// unc-path-regex returns a function
var regex = require('unc-path-regex')();

true

Returns true for windows UNC paths:

regex.test('\\/foo/bar');
regex.test('\\\\foo/bar');
regex.test('\\\\foo\\admin$');
regex.test('\\\\foo\\admin$\\system32');
regex.test('\\\\foo\\temp');
regex.test('\\\\/foo/bar');
regex.test('\\\\\\/foo/bar');

false

Returns false for non-UNC paths:

regex.test('/foo/bar');
regex.test('/');
regex.test('/foo');
regex.test('/foo/');
regex.test('c:');
regex.test('c:.');
regex.test('c:./');
regex.test('c:./file');
regex.test('c:/');
regex.test('c:/file');

Running tests

Install dev dependencies:

$ npm i -d && npm test

Contributing

Pull requests and stars are always welcome. For bugs and feature requests, please create an issue

Author

Jon Schlinkert


This file was generated by verb-cli on July 07, 2015.


EE First

Get the first event in a set of event emitters and event pairs, then clean up after itself.

Install

$ npm install ee-first

API

var first = require('ee-first')

first(arr, listener)

Invoke listener on the first event from the list specified in arr. arr is an array of arrays, with each inner array in the format [ee, ...event]. listener will be called only once, the first time any of the given events is emitted. If 'error' is one of the listened events and it fires first, the listener will be given the err argument.

The listener is invoked as listener(err, ee, event, args), where err is the first argument emitted from an error event, if applicable; ee is the event emitter that fired; event is the string event name that fired; and args is an array of the arguments that were emitted on the event.

var ee1 = new EventEmitter()
var ee2 = new EventEmitter()

first([
  [ee1, 'close', 'end', 'error'],
  [ee2, 'error']
], function (err, ee, event, args) {
  // listener invoked
})

.cancel()

The group of listeners can be cancelled before being invoked and have all the event listeners removed from the underlying event emitters.

var thunk = first([
  [ee1, 'close', 'end', 'error'],
  [ee2, 'error']
], function (err, ee, event, args) {
  // listener invoked
})

// cancel and clean up
thunk.cancel()


array.prototype.flat Version Badge

npm badge

An ES2019 spec-compliant Array.prototype.flat shim/polyfill/replacement that works as far down as ES3.

This package implements the es-shim API interface. It works in an ES3-supported environment and complies with the proposed spec.

Because Array.prototype.flat depends on a receiver (the this value), the main export takes the array to operate on as the first argument.

Getting started

npm install --save array.prototype.flat

Usage/Examples

var flat = require('array.prototype.flat');
var assert = require('assert');

var arr = [1, [2], [], 3, [[4]]];

assert.deepEqual(flat(arr, 1), [1, 2, 3, [4]]);

var flat = require('array.prototype.flat');
var assert = require('assert');
/* when Array#flat is not present */
delete Array.prototype.flat;
var shimmedFlat = flat.shim();

assert.equal(shimmedFlat, flat.getPolyfill());
assert.deepEqual(arr.flat(), flat(arr));

var flat = require('array.prototype.flat');
var assert = require('assert');
/* when Array#flat is present */
var shimmedFlat = flat.shim();

assert.equal(shimmedFlat, Array.prototype.flat);
assert.deepEqual(arr.flat(), flat(arr));

Tests

Simply clone the repo, npm install, and run npm test



@nodelib/fs.stat

Get the status of a file with some features.

:bulb: Highlights

Wrapper over standard methods (fs.lstat, fs.stat) with some features.

Install

npm install @nodelib/fs.stat

Usage

const fsStat = require('@nodelib/fs.stat');

fsStat.stat('path').then((stat) => {
    console.log(stat); // => fs.Stats
});

API

fsStat.stat(path, options)

Returns a Promise<fs.Stats> for provided path.

fsStat.statSync(path, options)

Returns a fs.Stats for provided path.

fsStat.statCallback(path, options, callback)

Returns a fs.Stats for provided path with standard callback-style.

path

The path argument for fs.lstat or fs.stat method.

options

See options section for more detailed information.

Options

Throw an error or return information about the symlink when the symlink is broken. When false, the methods return the result of the lstat call for broken symlinks.

By default, the methods of this package follow symlinks. If you do not want this behavior, set this option to false or use the standard method fs.lstat.

fs

By default, the built-in Node.js module (fs) is used to work with the file system. You can replace each method with your own.

interface FileSystemAdapter {
    lstat?: typeof fs.lstat;
    stat?: typeof fs.stat;
    lstatSync?: typeof fs.lstatSync;
    statSync?: typeof fs.statSync;
}

Changelog

See the Releases section of our GitHub project for changelogs for each release version.



emoji-regex Build status

emoji-regex offers a regular expression to match all emoji symbols (including textual representations of emoji) as per the Unicode Standard.

This repository contains a script that generates this regular expression based on the data from Unicode v12. Because of this, the regular expression can easily be updated whenever new emoji are added to the Unicode standard.

Installation

Via npm:

npm install emoji-regex

In Node.js:

const emojiRegex = require('emoji-regex');
// Note: because the regular expression has the global flag set, this module
// exports a function that returns the regex rather than exporting the regular
// expression itself, to make it impossible to (accidentally) mutate the
// original regular expression.

const text = `
\u{231A}: ⌚ default emoji presentation character (Emoji_Presentation)
\u{2194}\u{FE0F}: ↔️ default text presentation character rendered as emoji
\u{1F469}: 👩 emoji modifier base (Emoji_Modifier_Base)
\u{1F469}\u{1F3FF}: 👩🏿 emoji modifier base followed by a modifier
`;

const regex = emojiRegex();
let match;
while (match = regex.exec(text)) {
  const emoji = match[0];
  console.log(`Matched sequence ${ emoji } — code points: ${ [...emoji].length }`);
}

Console output:

Matched sequence ⌚ — code points: 1
Matched sequence ⌚ — code points: 1
Matched sequence ↔️ — code points: 2
Matched sequence ↔️ — code points: 2
Matched sequence 👩 — code points: 1
Matched sequence 👩 — code points: 1
Matched sequence 👩🏿 — code points: 2
Matched sequence 👩🏿 — code points: 2

To match emoji in their textual representation as well (i.e. emoji that are not Emoji_Presentation symbols and that aren’t forced to render as emoji by a variation selector), require the other regex:

const emojiRegex = require('emoji-regex/text.js');

Additionally, in environments which support ES2015 Unicode escapes, you may require ES2015-style versions of the regexes:

const emojiRegex = require('emoji-regex/es2015/index.js');
const emojiRegexText = require('emoji-regex/es2015/text.js');

Author

twitter/mathias
Mathias Bynens


json-schema-traverse

Traverse JSON Schema passing each schema object to callback

Build Status npm version Coverage Status

Install

npm install json-schema-traverse

Usage

const traverse = require('json-schema-traverse');
const schema = {
  properties: {
    foo: {type: 'string'},
    bar: {type: 'integer'}
  }
};

traverse(schema, {cb});
// cb is called 3 times with:
// 1. root schema
// 2. {type: 'string'}
// 3. {type: 'integer'}

// Or:

traverse(schema, {cb: {pre, post}});
// pre is called 3 times with:
// 1. root schema
// 2. {type: 'string'}
// 3. {type: 'integer'}
//
// post is called 3 times with:
// 1. {type: 'string'}
// 2. {type: 'integer'}
// 3. root schema

Callback function cb is called for each schema object (not including draft-06 boolean schemas), including the root schema, in pre-order traversal. Schema references ($ref) are not resolved; they are passed as-is. Alternatively, you can pass a {pre, post} object as cb: pre will be called before traversing child elements, and post after all child elements have been traversed.

Callback is passed these parameters:

Traverse objects in all unknown keywords

const traverse = require('json-schema-traverse');
const schema = {
  mySchema: {
    minimum: 1,
    maximum: 2
  }
};

traverse(schema, {allKeys: true, cb});
// cb is called 2 times with:
// 1. root schema
// 2. mySchema

Without the allKeys: true option, the callback will be called only with the root schema.



duplexify

Turn a writable and readable stream into a single streams2 duplex stream.

Similar to duplexer2, except it supports both streams2 and streams1 as input, and it allows you to set the readable and writable parts asynchronously using setReadable(stream) and setWritable(stream).

npm install duplexify

build status

Usage

Use duplexify(writable, readable, streamOptions) (or duplexify.obj(writable, readable) to create an object stream)

var duplexify = require('duplexify')

// turn writableStream and readableStream into a single duplex stream
var dup = duplexify(writableStream, readableStream)

dup.write('hello world') // will write to writableStream
dup.on('data', function(data) {
  // will read from readableStream
})

You can also set the readable and writable parts asynchronously

var dup = duplexify()

dup.write('hello world') // write will buffer until the writable
                         // part has been set

// wait a bit ...
dup.setReadable(readableStream)

// maybe wait some more?
dup.setWritable(writableStream)

If you call setReadable or setWritable multiple times it will unregister the previous readable/writable stream. To disable the readable or writable part call setReadable or setWritable with null.

If the readable or writable stream emits an error or close, it will destroy both streams and bubble up the event. You can also explicitly destroy the streams by calling dup.destroy(). The destroy method optionally takes an error object as an argument, in which case the error is emitted as part of the error event.

dup.on('error', function(err) {
  console.log('readable or writable emitted an error - close will follow')
})

dup.on('close', function() {
  console.log('the duplex stream is destroyed')
})

dup.destroy() // calls destroy on the readable and writable part (if present)

HTTP request example

Turning a node core http request into a duplex stream is as easy as:

var duplexify = require('duplexify')
var http = require('http')

var request = function(opts) {
  var req = http.request(opts)
  var dup = duplexify(req)
  req.on('response', function(res) {
    dup.setReadable(res)
  })
  return dup
}

var req = request({
  method: 'GET',
  host: 'www.google.com',
  port: 80
})

req.end()
req.pipe(process.stdout)

duplexify is part of the mississippi stream utility collection which includes more useful stream modules similar to this one.





duplexer3 Build Status Coverage Status

Like duplexer2 but using Streams3 without readable-stream dependency

var stream = require("stream");

var duplexer3 = require("duplexer3");

var writable = new stream.Writable({objectMode: true}),
    readable = new stream.Readable({objectMode: true});

writable._write = function _write(input, encoding, done) {
  if (readable.push(input)) {
    return done();
  } else {
    readable.once("drain", done);
  }
};

readable._read = function _read(n) {
  // no-op
};

// simulate the readable thing closing after a bit
writable.once("finish", function() {
  setTimeout(function() {
    readable.push(null);
  }, 500);
});

var duplex = duplexer3(writable, readable);

duplex.on("data", function(e) {
  console.log("got data", JSON.stringify(e));
});

duplex.on("finish", function() {
  console.log("got finish event");
});

duplex.on("end", function() {
  console.log("got end event");
});

duplex.write("oh, hi there", function() {
  console.log("finished writing");
});

duplex.end(function() {
  console.log("finished ending");
});
Output:

got data "oh, hi there"
finished writing
got finish event
finished ending
got end event

Overview

This is a reimplementation of duplexer using the Streams3 API which is standard in Node as of v4. Everything largely works the same.

Installation

Available via npm:

npm i duplexer3

API

duplexer3

Creates a new DuplexWrapper object, which is the actual class that implements most of the fun stuff. All that fun stuff is hidden. DON’T LOOK.

duplexer3([options], writable, readable)
const duplex = duplexer3(new stream.Writable(), new stream.Readable());

Arguments

Options

License

3-clause BSD. A copy is included with the source.

Contact



vary

NPM Version NPM Downloads Node.js Version Build Status Test Coverage

Manipulate the HTTP Vary header

Installation

This is a Node.js module available through the npm registry. Installation is done using the npm install command:

$ npm install vary

API

var vary = require('vary')

vary(res, field)

Adds the given header field to the Vary response header of res. This can be a string of a single field, a string of a valid Vary header, or an array of multiple fields.

This will append the field if it is not already listed; otherwise it is left in its current location.

// Append "Origin" to the Vary header of the response
vary(res, 'Origin')

vary.append(header, field)

Adds the given header field to the Vary response header string header. This can be a string of a single field, a string of a valid Vary header, or an array of multiple fields.

This will append the field if it is not already listed; otherwise it is left in its current location. The new header string is returned.

// Get header string appending "Origin" to "Accept, User-Agent"
vary.append('Accept, User-Agent', 'Origin')
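The append-if-not-listed behavior can be sketched with a hypothetical helper (not the module's actual source); field comparison is case-insensitive, since HTTP header field names are case-insensitive:

```javascript
// Hypothetical sketch of append-if-not-listed (not the module's source).
function appendVary (header, field) {
  var fields = header ? header.split(/ *, */) : []
  var exists = fields.some(function (f) {
    return f.toLowerCase() === field.toLowerCase()
  })
  if (!exists) fields.push(field)
  return fields.join(', ')
}

console.log(appendVary('Accept, User-Agent', 'Origin')) // 'Accept, User-Agent, Origin'
console.log(appendVary('Accept, Origin', 'origin'))     // 'Accept, Origin'
```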

Examples

Updating the Vary header when content is based on it

var http = require('http')
var vary = require('vary')

http.createServer(function onRequest (req, res) {
  // about to user-agent sniff
  vary(res, 'User-Agent')

  var ua = req.headers['user-agent'] || ''
  var isMobile = /mobi|android|touch|mini/i.test(ua)

  // serve site, depending on isMobile
  res.setHeader('Content-Type', 'text/html')
  res.end('You are (probably) ' + (isMobile ? '' : 'not ') + 'a mobile user')
})

Testing

$ npm test

ini

An ini format parser and serializer for node.

Sections are treated as nested objects. Items before the first heading are saved on the object directly.

Usage

Consider an ini-file config.ini that looks like this:

; this comment is being ignored
scope = global

[database]
user = dbuser
password = dbpassword
database = use_this_database

[paths.default]
datadir = /var/lib/data
array[] = first value
array[] = second value
array[] = third value

You can read, manipulate and write the ini-file like so:

var fs = require('fs')
  , ini = require('ini')

var config = ini.parse(fs.readFileSync('./config.ini', 'utf-8'))

config.scope = 'local'
config.database.database = 'use_another_database'
config.paths.default.tmpdir = '/tmp'
delete config.paths.default.datadir
config.paths.default.array.push('fourth value')

fs.writeFileSync('./config_modified.ini', ini.stringify(config, { section: 'section' }))

This will result in a file called config_modified.ini being written to the filesystem with the following content:

[section]
scope=local
[section.database]
user=dbuser
password=dbpassword
database=use_another_database
[section.paths.default]
tmpdir=/tmp
array[]=first value
array[]=second value
array[]=third value
array[]=fourth value

API

decode(inistring)

Decode the ini-style formatted inistring into a nested object.

parse(inistring)

Alias for decode(inistring)

encode(object, options)

Encode the object object into an ini-style formatted string. If the optional parameter section is given, then all top-level properties of the object are put into this section and the section-string is prepended to all sub-sections, see the usage example above.

The options object may contain the following:

For backwards compatibility reasons, if a string options is passed in, then it is assumed to be the section value.

stringify(object, options)

Alias for encode(object, [options])

safe(val)

Escapes the string val such that it is safe to be used as a key or value in an ini-file. Basically escapes quotes. For example

ini.safe('"unsafe string"')

would result in

"\"unsafe string\""

unsafe(val)

Unescapes the string val.
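A simplified sketch of the quote-escaping round trip (the real implementation handles more cases, such as values that do not need quoting at all):

```javascript
// Simplified sketch of safe()/unsafe() quoting (not the module's source).
function safe (val) {
  // escape inner quotes and wrap the whole string
  return '"' + String(val).replace(/"/g, '\\"') + '"'
}

function unsafe (val) {
  var s = String(val)
  if (s.charAt(0) === '"' && s.charAt(s.length - 1) === '"') {
    return s.slice(1, -1).replace(/\\"/g, '"')
  }
  return s
}

console.log(safe('"unsafe string"'))         // "\"unsafe string\""
console.log(unsafe(safe('"unsafe string"'))) // "unsafe string"
```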



Installation

npm install --save @types/node



Summary

This package contains type definitions for Node.js (http://nodejs.org/).



Details

Files were exported from https://github.com/DefinitelyTyped/DefinitelyTyped/tree/master/types/node.

Additional Details



Credits

These definitions were written by Microsoft TypeScript, DefinitelyTyped, Alberto Schiabel, Alexander T., Alvis HT Tang, Andrew Makarov, Benjamin Toueg, Bruno Scheufler, Chigozirim C., David Junger, Deividas Bakanas, Eugene Y. Q. Shen, Flarna, Hannes Magnusson, Hoàng Văn Khải, Huw, Kelvin Jin, Klaus Meinhardt, Lishude, Mariusz Wiktorczyk, Mohsen Azimi, Nicolas Even, Nikita Galkin, Parambir Singh, Sebastian Silbermann, Simon Schick, Thomas den Hollander, Wilco Bakker, wwwy3y3, Samuel Ainsworth, Kyle Uehlein, Jordi Oliveras Rovira, Thanik Bhongbhibhat, Marcin Kopacz, Trivikram Kamat, Minh Son Nguyen, Junxiao Shi, Ilia Baryshnikov, ExE Boss, Surasak Chaisurin, Piotr Błażejewicz, Anna Henningsen, Jason Kwok, and Victor Perin.

define-properties Version Badge

npm badge

browser support

Define multiple non-enumerable properties at once. Uses Object.defineProperty when available; falls back to standard assignment in older engines. Existing properties are not overridden. Optionally accepts a map of property names to predicates; when a predicate returns true, the corresponding property is force-overridden.

Example

var define = require('define-properties');
var assert = require('assert');

var obj = define({ a: 1, b: 2 }, {
    a: 10,
    b: 20,
    c: 30
});
assert(obj.a === 1);
assert(obj.b === 2);
assert(obj.c === 30);
if (define.supportsDescriptors) {
    assert.deepEqual(Object.keys(obj), ['a', 'b']);
    assert.deepEqual(Object.getOwnPropertyDescriptor(obj, 'c'), {
        configurable: true,
        enumerable: false,
        value: 30,
        writable: false
    });
}

Then, with predicates:

var define = require('define-properties');
var assert = require('assert');

var obj = define({ a: 1, b: 2, c: 3 }, {
    a: 10,
    b: 20,
    c: 30
}, {
    a: function () { return false; },
    b: function () { return true; }
});
assert(obj.a === 1);
assert(obj.b === 20);
assert(obj.c === 3);
if (define.supportsDescriptors) {
    assert.deepEqual(Object.keys(obj), ['a', 'c']);
    assert.deepEqual(Object.getOwnPropertyDescriptor(obj, 'b'), {
        configurable: true,
        enumerable: false,
        value: 20,
        writable: false
    });
}

Tests

Simply clone the repo, npm install, and run npm test



assign-symbols NPM version

Assign the enumerable ES6 Symbol properties from one or more source objects to the first object passed in the arguments. Can be used as a supplement to other extend, assign, or merge methods, as a polyfill for the Symbol portion of the ES6 Object.assign method.

From the Mozilla Developer docs for Symbol:

A symbol is a unique and immutable data type and may be used as an identifier for object properties. The symbol object is an implicit object wrapper for the symbol primitive data type.

Install

Install with npm

$ npm i assign-symbols --save

Usage

var assignSymbols = require('assign-symbols');
var obj = {};

var one = {};
var symbolOne = Symbol('aaa');
one[symbolOne] = 'bbb';

var two = {};
var symbolTwo = Symbol('ccc');
two[symbolTwo] = 'ddd';

assignSymbols(obj, one, two);

console.log(obj[symbolOne]);
//=> 'bbb'
console.log(obj[symbolTwo]);
//=> 'ddd'
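Conceptually, the module amounts to a loop over Object.getOwnPropertySymbols that copies only enumerable symbol keys. A simplified sketch (not the module's source):

```javascript
// Simplified sketch of the symbol-copying loop (not the module's source).
function assignSymbolsSketch (target) {
  for (var i = 1; i < arguments.length; i++) {
    var source = arguments[i]
    Object.getOwnPropertySymbols(source).forEach(function (sym) {
      // only copy enumerable symbol properties
      if (Object.prototype.propertyIsEnumerable.call(source, sym)) {
        target[sym] = source[sym]
      }
    })
  }
  return target
}

var key = Symbol('key')
var result = assignSymbolsSketch({}, { [key]: 'value' })
console.log(result[key]) // 'value'
```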

Similar projects

Running tests

Install dev dependencies:

$ npm i -d && npm test

Contributing

Pull requests and stars are always welcome. For bugs and feature requests, please create an issue.

Author

Jon Schlinkert


This file was generated by verb-cli on November 06, 2015.


flat-cache

A stupidly simple key/value storage using files to persist the data

NPM Version Build Status

install

npm i --save flat-cache

Usage

var flatCache = require('flat-cache')
// loads the cache; if one does not exist for the given
// id, a new one will be created
var cache = flatCache.load('cacheId');

// sets a key on the cache
cache.setKey('key', { foo: 'var' });

// get a key from the cache
cache.getKey('key') // { foo: 'var' }

// fetch the entire persisted object
cache.all() // { 'key': { foo: 'var' } }

// remove a key
cache.removeKey('key'); // removes a key from the cache

// save it to disk
cache.save(); // very important, if you don't save no changes will be persisted.
// cache.save( true /* noPrune */) // can be used to prevent the removal of non visited keys

// loads the cache from a given directory; if one does
// not exist for the given id, a new one will be created
var cache = flatCache.load('cacheId', path.resolve('./path/to/folder'));

// The following methods are useful to clear the cache
// delete a given cache
flatCache.clearCacheById('cacheId') // removes the cacheId document if one exists.

// delete all cache
flatCache.clearAll(); // remove the cache directory

Motivation for this module

I needed a super simple and dumb in-memory cache with optional disk persistence in order to make a script that beautifies files with esformatter execute only on the files that were changed since the last run. To make that possible, we need to store the fileSize and modificationTime of the files. So a simple key/value storage was needed, and bam! This module was born.

Important notes

Changelog

changelog



eslint-visitor-keys

npm version Downloads/month Build Status Dependency Status

Constants and utilities about visitor keys to traverse AST.

💿 Installation

Use npm to install.

$ npm install eslint-visitor-keys

Requirements

📖 Usage

const evk = require("eslint-visitor-keys")

evk.KEYS

type: { [type: string]: string[] | undefined }

Visitor keys. These keys are frozen.

This is an object. Keys are the types of ESTree nodes. Their values are arrays of property names which have child nodes.

For example:

console.log(evk.KEYS.AssignmentExpression) // → ["left", "right"]

evk.getKeys(node)

type: (node: object) => string[]

Get the visitor keys of a given AST node.

This is similar to Object.keys(node) from the ES standard, but some keys are excluded: parent, leadingComments, trailingComments, and names which start with _.

This will be used to traverse unknown nodes.

For example:

const node = {
    type: "AssignmentExpression",
    left: { type: "Identifier", name: "foo" },
    right: { type: "Literal", value: 0 }
}
console.log(evk.getKeys(node)) // → ["type", "left", "right"]
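The exclusion rule described above can be approximated as a filter over Object.keys (a sketch, not the module's source):

```javascript
// Sketch: approximate getKeys as a filter over Object.keys.
const SKIPPED = new Set(["parent", "leadingComments", "trailingComments"]);

function getKeysSketch(node) {
    return Object.keys(node).filter(
        key => !key.startsWith("_") && !SKIPPED.has(key)
    );
}

const sample = {
    type: "Identifier",
    name: "foo",
    _internal: true,
    parent: null
};
console.log(getKeysSketch(sample)); // → ["type", "name"]
```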

evk.unionWith(additionalKeys)

type: (additionalKeys: object) => { [type: string]: string[] | undefined }

Make the union set with evk.KEYS and the given keys.

For example:

console.log(evk.unionWith({
    MethodDefinition: ["decorators"]
})) // → { ..., MethodDefinition: ["decorators", "key", "value"], ... }

📰 Change log

See GitHub releases.

🍻 Contributing

Welcome. See ESLint contribution guidelines.

Development commands






content-type

NPM Version NPM Downloads Node.js Version Build Status Test Coverage

Create and parse HTTP Content-Type header according to RFC 7231

Installation

$ npm install content-type

API

var contentType = require('content-type')

contentType.parse(string)

var obj = contentType.parse('image/svg+xml; charset=utf-8')

Parse a content type string. This will return an object with the following properties (examples are shown for the string 'image/svg+xml; charset=utf-8'):

Throws a TypeError if the string is missing or invalid.
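The shape of the returned object can be illustrated with a simplified parser (a sketch only; the real module validates syntax per RFC 7231 and throws on invalid input):

```javascript
// Simplified sketch of the returned shape (not the module's parser).
function parseSketch (str) {
  var parts = str.split(';').map(function (s) { return s.trim() })
  var parameters = {}
  parts.slice(1).forEach(function (p) {
    var idx = p.indexOf('=')
    parameters[p.slice(0, idx).toLowerCase()] = p.slice(idx + 1)
  })
  return { type: parts[0].toLowerCase(), parameters: parameters }
}

console.log(parseSketch('image/svg+xml; charset=utf-8'))
// → { type: 'image/svg+xml', parameters: { charset: 'utf-8' } }
```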

contentType.parse(req)

var obj = contentType.parse(req)

Parse the content-type header from the given req. Short-cut for contentType.parse(req.headers['content-type']).

Throws a TypeError if the Content-Type header is missing or invalid.

contentType.parse(res)

var obj = contentType.parse(res)

Parse the content-type header set on the given res. Short-cut for contentType.parse(res.getHeader('content-type')).

Throws a TypeError if the Content-Type header is missing or invalid.

contentType.format(obj)

var str = contentType.format({type: 'image/svg+xml'})

Format an object into a content type string. This will return a string of the content type for the given object with the following properties (examples are shown that produce the string 'image/svg+xml; charset=utf-8'):

Throws a TypeError if the object contains an invalid type or parameter names.



has-values NPM version NPM downloads Build Status

Returns true if any values exist, false if empty. Works for booleans, functions, numbers, strings, nulls, objects and arrays.

Install

Install with npm:

$ npm install has-values --save

Usage

var hasValue = require('has-values');

hasValue('a');
//=> true

hasValue('');
//=> false

hasValue(1);
//=> true

hasValue(0);
//=> false

hasValue(0, true); // treat zero as a value
//=> true

hasValue({a: 'a'});
//=> true

hasValue({});
//=> false

hasValue(['a']);
//=> true

hasValue([]);
//=> false

hasValue(function(foo) {}); // function length/arity
//=> true

hasValue(function() {});
//=> false

hasValue(true);
//=> true

hasValue(false);
//=> true

isEmpty

To test for empty values, do:

function isEmpty(o, isZero) {
  return !hasValue(o, isZero);
}

You might also be interested in these projects:

Contributing

Pull requests and stars are always welcome. For bugs and feature requests, please create an issue.

Building docs

Generate readme and API documentation with verb:

$ npm install verb && npm run docs

Or, if verb is installed globally:

$ verb

Running tests

Install dev dependencies:

$ npm install -d && npm test

Author

Jon Schlinkert


This file was generated by verb on March 27, 2016.


color-convert

Build Status

Color-convert is a color conversion library for JavaScript and node. It converts all ways between rgb, hsl, hsv, hwb, cmyk, ansi, ansi16, hex strings, and CSS keywords (will round to closest):

var convert = require('color-convert');

convert.rgb.hsl(140, 200, 100);             // [96, 48, 59]
convert.keyword.rgb('blue');                // [0, 0, 255]

var rgbChannels = convert.rgb.channels;     // 3
var cmykChannels = convert.cmyk.channels;   // 4
var ansiChannels = convert.ansi16.channels; // 1


Install

npm install color-convert


API

Simply get the property of the from and to conversion that you’re looking for.

All functions have a rounded and unrounded variant. By default, return values are rounded. To get the unrounded (raw) results, simply tack on .raw to the function.

All ‘from’ functions have a hidden property called .channels that indicates the number of channels the function expects (not including alpha).

var convert = require('color-convert');

// Hex to LAB
convert.hex.lab('DEADBF');         // [ 76, 21, -2 ]
convert.hex.lab.raw('DEADBF');     // [ 75.56213190997677, 20.653827952644754, -2.290532499330533 ]

// RGB to CMYK
convert.rgb.cmyk(167, 255, 4);     // [ 35, 0, 98, 0 ]
convert.rgb.cmyk.raw(167, 255, 4); // [ 34.509803921568626, 0, 98.43137254901961, 0 ]

Arrays

All functions that accept multiple arguments also support passing an array.

Note that this does not apply to functions that convert from a color that only requires one value (e.g. keyword, ansi256, hex, etc.)

var convert = require('color-convert');

convert.rgb.hex(123, 45, 67);      // '7B2D43'
convert.rgb.hex([123, 45, 67]);    // '7B2D43'

Routing

Conversions that don’t have an explicitly defined conversion (in conversions.js), but can be converted by means of sub-conversions (e.g. XYZ -> RGB -> CMYK), are automatically routed together. This allows just about any color model supported by color-convert to be converted to any other model, so long as a sub-conversion path exists. This is also true for conversions requiring more than one step in between (e.g. LCH -> LAB -> XYZ -> RGB -> Hex).

Keep in mind that extensive conversions may result in a loss of precision, and exist only to be complete. For a list of “direct” (single-step) conversions, see conversions.js.



Contribute

If there is a new model you would like to support, or want to add a direct conversion between two existing models, please send us a pull request.










repeat-element NPM version NPM monthly downloads NPM total downloads Linux Build Status

Create an array by repeating the given value n times.

Please consider following this project’s author, Jon Schlinkert, and consider starring the project to show your :heart: and support.

Install

Install with npm:

$ npm install --save repeat-element

Usage

const repeat = require('repeat-element');

repeat('a', 5);
//=> ['a', 'a', 'a', 'a', 'a']

repeat('a', 1);
//=> ['a']

repeat('a', 0);
//=> []

repeat(null, 5)
//=> [ null, null, null, null, null ]

repeat({some: 'object'}, 5)
//=> [ { some: 'object' },
//     { some: 'object' },
//     { some: 'object' },
//     { some: 'object' },
//     { some: 'object' } ]

repeat(5, 5)
//=> [ 5, 5, 5, 5, 5 ]
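The behavior amounts to filling a preallocated array with the same element; a minimal sketch (not necessarily the module's exact source) is:

```javascript
// Sketch: fill a preallocated array with the same element.
function repeatSketch(ele, num) {
  const arr = new Array(num);
  for (let i = 0; i < num; i++) arr[i] = ele;
  return arr;
}

console.log(repeatSketch('a', 3)); //=> [ 'a', 'a', 'a' ]
console.log(repeatSketch('a', 0)); //=> []
```

Note that when the element is an object, every slot holds a reference to the same object, as the {some: 'object'} example above implies.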

About

Contributing

Pull requests and stars are always welcome. For bugs and feature requests, please create an issue.

Running Tests

Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command:

Building docs

(This project’s readme.md is generated by verb, please don’t edit the readme directly. Any changes to the readme must be made in the .verb.md readme template.)

To generate the readme, run the following command:

Commits  Contributor
17       jonschlinkert
3        LinusU
1        architectcodes

Author

Jon Schlinkert


This file was generated by verb-generate-readme, v0.6.0, on August 19, 2018.


ansi-align

align-text with ANSI support for CLIs

Build Status Coverage Status Standard Version Greenkeeper badge

Easily center- or right-align a block of text, carefully ignoring ANSI escape codes.

E.g. turn this:

ansi text block no alignment :(

Into this:

ansi text block center aligned!

Install

npm install --save ansi-align
var ansiAlign = require('ansi-align')

API

ansiAlign(text, [opts])

Align the given text per the line with the greatest string-width, returning a new string (or array).

Arguments

Options

ansiAlign.center(text)

Alias for ansiAlign(text, { align: 'center' }).

ansiAlign.right(text)

Alias for ansiAlign(text, { align: 'right' }).

ansiAlign.left(text)

Alias for ansiAlign(text, { align: 'left' }), which is a no-op.
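
The centering behavior described above can be sketched in a few lines. This is an illustration of the idea only, not the module's implementation: measure each line's width with ANSI escape codes stripped out, then pad relative to the widest line.

```javascript
// Matches SGR color/style escape sequences such as \u001b[31m.
const ANSI = /\u001b\[[0-9;]*m/g;
const visibleWidth = (s) => s.replace(ANSI, '').length;

// Center each line against the visually widest line in the block.
function center(lines) {
  const max = Math.max(...lines.map(visibleWidth));
  return lines.map(
    (l) => ' '.repeat(Math.floor((max - visibleWidth(l)) / 2)) + l
  );
}

console.log(center(['ansi', 'text block', 'center aligned!']).join('\n'));
```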

Similar Packages



array-includes Version Badge

npm badge

An ES7/ES2016 spec-compliant Array.prototype.includes shim/polyfill/replacement that works as far down as ES3.

This package implements the es-shim API interface. It works in an ES3-supported environment and complies with the proposed spec.

Because Array.prototype.includes depends on a receiver (the this value), the main export takes the array to operate on as the first argument.

Getting started

npm install --save array-includes

Usage

Basic usage: includes(array, value[, fromIndex=0])

var includes = require('array-includes');
var assert = require('assert');
var arr = [ 'one', 'two' ];

includes(arr, 'one'); // true
includes(arr, 'three'); // false
includes(arr, 'one', 1); // false

Example

var arr = [
    1,
    'foo',
    NaN,
    -0
];

assert.equal(arr.indexOf(0) > -1, true);
assert.equal(arr.indexOf(-0) > -1, true);
assert.equal(includes(arr, 0), true);
assert.equal(includes(arr, -0), true);

assert.equal(arr.indexOf(NaN) > -1, false);
assert.equal(includes(arr, NaN), true);

assert.equal(includes(arr, 'foo', 0), true);
assert.equal(includes(arr, 'foo', 1), true);
assert.equal(includes(arr, 'foo', 2), false);
/* when Array#includes is not present */
delete Array.prototype.includes;
var shimmedIncludes = includes.shim();

assert.equal(shimmedIncludes, includes.getPolyfill());
assert.equal(arr.includes('foo', 1), includes(arr, 'foo', 1));
/* when Array#includes is present */
var shimmedIncludes = includes.shim();

assert.equal(shimmedIncludes, Array.prototype.includes);
assert.equal(arr.includes('foo', 1), includes(arr, 'foo', 1));

Tests

Simply clone the repo, npm install, and run npm test



cliui

ci NPM version Conventional Commits nycrc config on GitHub

easily create complex multi-column command-line interfaces.

Example

const ui = require('cliui')()
const chalk = require('chalk') // chalk is used below for colored text

ui.div('Usage: $0 [command] [options]')

ui.div({
  text: 'Options:',
  padding: [2, 0, 1, 0]
})

ui.div(
  {
    text: "-f, --file",
    width: 20,
    padding: [0, 4, 0, 4]
  },
  {
    text: "the file to load." +
      chalk.green("(if this description is long it wraps).")
    ,
    width: 20
  },
  {
    text: chalk.red("[required]"),
    align: 'right'
  }
)

console.log(ui.toString())

As of v7 cliui supports Deno and ESM:

import cliui from "https://deno.land/x/cliui/deno.ts";

const ui = cliui({})

ui.div('Usage: $0 [command] [options]')

ui.div({
  text: 'Options:',
  padding: [2, 0, 1, 0]
})

ui.div({
  text: "-f, --file",
  width: 20,
  padding: [0, 4, 0, 4]
})

console.log(ui.toString())

Layout DSL

cliui exposes a simple layout DSL:

If you create a single ui.div, passing a string rather than an object:

as an example…

var ui = require('./')({
  width: 60
})

ui.div(
  'Usage: node ./bin/foo.js\n' +
  '  <regex>\t  provide a regex\n' +
  '  <glob>\t  provide a glob\t [required]'
)

console.log(ui.toString())

will output:

Usage: node ./bin/foo.js
  <regex>  provide a regex
  <glob>   provide a glob          [required]

Methods

cliui = require('cliui')

cliui({width: integer})

Specify the maximum width of the UI being generated. If no width is provided, cliui will try to get the current window’s width and use it, and if that doesn’t work, width will be set to 80.

cliui({wrap: boolean})

Enable or disable the wrapping of text in a column.

cliui.div(column, column, column)

Create a row with any number of columns. A column can be either a string or an object with the following options:

cliui.span(column, column, column)

Similar to div, except the next row will be appended without a new line being created.

cliui.resetOutput()

Resets the UI elements of the current cliui instance, maintaining the values set for width and wrap.



readable-stream

Node-core v8.11.1 streams for userland Build Status

NPM NPM

Sauce Test Status

npm install --save readable-stream

Node-core streams for userland

This package is a mirror of the Streams2 and Streams3 implementations in Node-core.

Full documentation may be found on the Node.js website.

If you want to guarantee a stable streams base, regardless of what version of Node you (or the users of your libraries) are running, use readable-stream only and avoid the “stream” module in Node-core. For background, see this blogpost.

As of version 2.0.0 readable-stream uses semantic versioning.



Streams Working Group

readable-stream is maintained by the Streams Working Group, which oversees the development and maintenance of the Streams API within Node.js. The responsibilities of the Streams Working Group include:

Team Members



isobject NPM version NPM downloads Build Status

Returns true if the value is an object and not an array or null.


Use is-plain-object if you want only objects that are created by the Object constructor.

Install

Install with npm:

$ npm install isobject

Install with bower

$ bower install isobject

Usage

var isObject = require('isobject');

True

All of the following return true:

function Foo() {}

isObject({});
isObject(Object.create({}));
isObject(Object.create(Object.prototype));
isObject(Object.create(null));
isObject(new Foo);
isObject(/foo/);

False

All of the following return false:

isObject();
isObject(function () {});
isObject(1);
isObject([]);
isObject(undefined);
isObject(null);

You might also be interested in these projects:

merge-deep: Recursively merge values in a javascript object. | homepage

Contributing

Pull requests and stars are always welcome. For bugs and feature requests, please create an issue.

Building docs

Generate readme and API documentation with verb:

$ npm install verb && npm run docs

Or, if verb is installed globally:

$ verb

Running tests

Install dev dependencies:

$ npm install -d && npm test

Author

Jon Schlinkert


This file was generated by verb, v0.9.0, on April 25, 2016.



Build Status dependency status dev dependency status



extend() for Node.js Version Badge

node-extend is a port of the classic extend() method from jQuery. It behaves as you expect. It is simple, tried and true.

Notes:

Installation

This package is available on npm as: extend

npm install extend

Usage

Syntax: extend ( [deep], target, object1, [objectN] )

Extend one object with one or more others, returning the modified object.

Example:

var extend = require('extend');
extend(targetObject, object1, object2);

Keep in mind that the target object will be modified, and will be returned from extend().

If a boolean true is specified as the first argument, extend performs a deep copy, recursively copying any objects it finds. Otherwise, the copy will share structure with the original object(s). Undefined properties are not copied. However, properties inherited from the object’s prototype will be copied over. Warning: passing false as the first argument is not supported.
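
The recursion described above can be sketched as a tiny deep-extend. This is an illustrative sketch, not the module's source, and it skips many edge cases the real implementation handles (arrays, prototype pollution guards, and so on).

```javascript
// Minimal deep-extend sketch: recurse into plain objects, copy everything else.
function deepExtend(target, source) {
  for (const key in source) {          // inherited properties are copied too
    const val = source[key];
    if (val === undefined) continue;   // undefined properties are not copied
    if (val && typeof val === 'object' && !Array.isArray(val)) {
      target[key] = deepExtend(target[key] || {}, val);
    } else {
      target[key] = val;
    }
  }
  return target;
}

const out = deepExtend({ a: { x: 1 } }, { a: { y: 2 }, b: 3 });
console.log(out); // { a: { x: 1, y: 2 }, b: 3 }
```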

Arguments

Acknowledgements

All credit to the jQuery authors for perfecting this amazing utility.

Ported to Node.js by Stefan Thomas with contributions by Jonathan Buchanan and Jordan Harband.



Multimap - Map which Allow Multiple Values for the same Key

NPM version Build Status

Install

npm install multimap --save

Usage

If you’d like to use the native version when it exists and fall back to the polyfill when it doesn’t, without implementing Map on the global scope, do:

var Multimap = require('multimap');
var m = new Multimap();

If the global ES6 Map exists or Multimap.Map is set, Multimap will use that Map as its inner store, which means objects can be used as keys.

var Multimap = require('multimap');

// if harmony is on
/* nothing need to do */
// or if you are using es6-shim
Multimap.Map = ShimMap;

var m = new Multimap();
var key = {};
m.set(key, 'one');

Otherwise, a plain object will be used as the store, and all keys will be converted to strings.
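
The inner-store idea can be sketched as a Map whose values are arrays. This TinyMultimap is a hypothetical illustration of the concept, not the package's implementation.

```javascript
// A minimal multimap: one Map key, many values collected in an array.
class TinyMultimap {
  constructor() { this.store = new Map(); }
  set(key, ...values) {
    if (!this.store.has(key)) this.store.set(key, []);
    this.store.get(key).push(...values);
    return this; // chainable, like the real set()
  }
  get(key) { return this.store.get(key); }
}

const m = new TinyMultimap();
const key = {};                  // objects work as keys because Map is the store
m.set(key, 'one').set(key, 'two');
console.log(m.get(key));         // [ 'one', 'two' ]
```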

In Modern Browser

Just download the index.js as Multimap.js.

<script src="Multimap.js"></script>
<script>
var map = new Multimap([['a', 1], ['b', 2], ['c', 3]]);
map = map.set('b', 20);
map.get('b'); // [2, 20]
</script>

Or use as an AMD loader:

require(['./Multimap.js'], function (Multimap) {
  var map = new Multimap([['a', 1], ['b', 2], ['c', 3]]);
  map = map.set('b', 20);
  map.get('b'); // [2, 20]
});

API

Following shows how to use Multimap:

var Multimap = require('multimap');

var map = new Multimap([['a', 'one'], ['b', 1], ['a', 'two'], ['b', 2]]);

map.size;                 // 4
map.count;                // 2

map.get('a');             // ['one', 'two']
map.get('b');             // [1, 2]

map.has('a');             // true
map.has('foo');           // false

map.has('a', 'one');      // true
map.has('b', 3);          // false

map.set('a', 'three');
map.size;                 // 5
map.count;                // 2
map.get('a');             // ['one', 'two', 'three']

map.set('b', 3, 4);
map.size;                 // 7
map.count;                // 2

map.delete('a', 'three'); // true
map.delete('x');          // false
map.delete('a', 'four');  // false
map.delete('b');          // true

map.size;                 // 2
map.count;                // 1

map.set('b', 1, 2);
map.size;                 // 4
map.count;                // 2


map.forEach(function (value, key) {
  // iterates { 'one', 'a' }, { 'two', 'a' }, { 1, 'b' }, { 2, 'b' }
});

map.forEachEntry(function (entry, key) {
  // iterates {['one', 'two'], 'a' }, {[1, 2], 'b' }
});


var keys = map.keys();      // iterator with ['a', 'b']
keys.next().value;          // 'a'
var values = map.values();  // iterator ['one', 'two', 1, 2]

map.clear();                // undefined
map.size;                   // 0
map.count;                  // 0


@nodelib/fs.stat

Get the status of a file with some features.

:bulb: Highlights

A wrapper around the standard fs.lstat and fs.stat methods, with some extra features.

Install

npm install @nodelib/fs.stat

Usage

import * as fsStat from '@nodelib/fs.stat';

fsStat.stat('path', (error, stats) => { /* … */ });

API

.stat(path, optionsOrSettings, callback)

Returns an instance of fs.Stats class for provided path with standard callback-style.

fsStat.stat('path', (error, stats) => { /* … */ });
fsStat.stat('path', {}, (error, stats) => { /* … */ });
fsStat.stat('path', new fsStat.Settings(), (error, stats) => { /* … */ });

.statSync(path, optionsOrSettings)

Returns an instance of fs.Stats class for provided path.

const stats = fsStat.statSync('path');
const stats = fsStat.statSync('path', {});
const stats = fsStat.statSync('path', new fsStat.Settings());

path

A path to a file. If a URL is provided, it must use the file: protocol.

optionsOrSettings

An Options object or an instance of Settings class.

:book: When you pass a plain object, an instance of the Settings class will be created automatically. If you plan to call the method frequently, use a pre-created instance of the Settings class.

Settings(options)

A class holding the full settings of the package.

const settings = new fsStat.Settings({ followSymbolicLink: false });

const stats = fsStat.stat('path', settings);

Options

followSymbolicLink

Follow symbolic link or not. Call fs.stat on a symbolic link if true.

markSymbolicLink

Mark the symbolic link by forcing the isSymbolicLink function of the returned stats to always return true (even after fs.stat).

:book: Can be used if you want to know what is hidden behind a symbolic link, but still want to know that it is a symbolic link.

throwErrorOnBrokenSymbolicLink

Throw an error when the symbolic link is broken if true, or safely return the result of the lstat call if false.

fs

By default, the built-in Node.js module (fs) is used to work with the file system. You can replace any method with your own.

interface FileSystemAdapter {
    lstat?: typeof fs.lstat;
    stat?: typeof fs.stat;
    lstatSync?: typeof fs.lstatSync;
    statSync?: typeof fs.statSync;
}

const settings = new fsStat.Settings({
    fs: { lstat: fakeLstat }
});

Changelog

See the Releases section of our GitHub project for changelog for each release version.

write-file-atomic

This is an extension for Node’s fs.writeFile that makes its operation atomic and allows you to set ownership (uid/gid) of the file.

var writeFileAtomic = require('write-file-atomic')
writeFileAtomic(filename, data, options, callback)

Atomically and asynchronously writes data to a file, replacing the file if it already exists. data can be a string or a buffer.

The file is initially named filename + "." + murmurhex(__filename, process.pid, ++invocations). Note that require('worker_threads').threadId is used in addition to process.pid if running inside of a worker thread. If writeFile completes successfully then, if passed the chown option it will change the ownership of the file. Finally it renames the file back to the filename you specified. If it encounters errors at any of these steps it will attempt to unlink the temporary file and then pass the error back to the caller. If multiple writes are concurrently issued to the same file, the write operations are put into a queue and serialized in the order they were called, using Promises. Writes to different files are still executed in parallel.

If provided, the chown option requires both uid and gid properties, or else you’ll get an error. If chown is not specified it will default to using the owner of the previous file. To prevent chown from being run you can also pass false, in which case the file will be created with the current user’s credentials.

If mode is not specified, it will default to using the permissions from an existing file, if any. Explicitly setting this to false removes this default, resulting in a file created with the system default permissions.

If options is a String, it’s assumed to be the encoding option. The encoding option is ignored if data is a buffer. It defaults to 'utf8'.

If the fsync option is false, writeFile will skip the final fsync call.

If the tmpfileCreated option is specified it will be called with the name of the tmpfile when created.

Example:

writeFileAtomic('message.txt', 'Hello Node', {chown:{uid:100,gid:50}}, function (err) {
  if (err) throw err;
  console.log('It\'s saved!');
});

This function also supports async/await:

(async () => {
  try {
    await writeFileAtomic('message.txt', 'Hello Node', {chown:{uid:100,gid:50}});
    console.log('It\'s saved!');
  } catch (err) {
    console.error(err);
    process.exit(1);
  }
})();

var writeFileAtomicSync = require('write-file-atomic').sync
writeFileAtomicSync(filename, data, options)

The synchronous version of writeFileAtomic.



run-parallel travis npm downloads javascript style guide

Run an array of functions in parallel

parallel Sauce Test Status

install

npm install run-parallel

usage

parallel(tasks, callback)

Run the tasks array of functions in parallel, without waiting until the previous function has completed. If any of the functions passes an error to its callback, the main callback is immediately called with that error. Once the tasks have completed, the results are passed to the final callback as an array.

It is also possible to use an object instead of an array. Each property will be run as a function and the results will be passed to the final callback as an object instead of an array. This can be a more readable way of handling the results.

arguments
example
var parallel = require('run-parallel')

parallel([
  function (callback) {
    setTimeout(function () {
      callback(null, 'one')
    }, 200)
  },
  function (callback) {
    setTimeout(function () {
      callback(null, 'two')
    }, 100)
  }
],
// optional callback
function (err, results) {
  // the results array will equal ['one','two'] even though
  // the second function had a shorter timeout.
})
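
For illustration, the ordering behavior shown in the example can be sketched in a few lines. This is a minimal sketch of the idea, not the module's actual source; it omits the object-of-tasks form and other details.

```javascript
// Run all tasks at once; collect results by index so output order
// matches input order regardless of completion order.
function parallel(tasks, cb) {
  const results = new Array(tasks.length);
  let pending = tasks.length;
  let failed = false;
  if (pending === 0) return cb(null, results);
  tasks.forEach((task, i) => {
    task((err, result) => {
      if (failed) return;            // report only the first error
      if (err) { failed = true; return cb(err); }
      results[i] = result;
      if (--pending === 0) cb(null, results);
    });
  });
}

parallel([
  (cb) => setTimeout(() => cb(null, 'one'), 20),
  (cb) => setTimeout(() => cb(null, 'two'), 10)
], (err, results) => {
  console.log(results); // ['one', 'two'] even though 'two' finished first
});
```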

This module is basically equivalent to async.parallel, but it’s handy to just have the one function you need instead of the kitchen sink. Modularity! Especially handy if you’re serving to the browser and need to reduce your JavaScript bundle size.

Works great in the browser with browserify!

see also



flatted

snow flake

Social Media Photo by Matt Seymour on Unsplash

A super light (0.5K) and fast circular JSON parser, directly from the creator of CircularJSON.

Now available also for PHP.

npm i flatted

Usable via CDN or as regular module.

// ESM
import {parse, stringify} from 'flatted';

// CJS
const {parse, stringify} = require('flatted');

const a = [{}];
a[0].a = a;
a.push(a);

stringify(a); // [["1","0"],{"a":"0"}]

Flatted VS JSON

As it is for every other specialized format capable of serializing and deserializing circular data, you should never JSON.parse(Flatted.stringify(data)), and you should never Flatted.parse(JSON.stringify(data)).

The only way this could work is to Flatted.parse(Flatted.stringify(data)), as is also the case for CircularJSON or any other such format; otherwise there is no guarantee of data integrity.

Also please note this project serializes and deserializes only data compatible with JSON: sockets, or anything else with internal classes beyond those allowed by the JSON standard, won’t be serialized and deserialized as expected.

New in V1: Exact same JSON API

Compatibility

All ECMAScript engines compatible with Map, Set, Object.keys, and Array.prototype.reduce will work, even if polyfilled.

How does it work ?

While stringifying, all objects, including arrays, and all strings are flattened out and replaced with a unique index. *

Once parsed, all indexes will be replaced through the flattened collection.

* represented as string to avoid conflicts with numbers

// logic example
var a = [{one: 1}, {two: '2'}];
a[0].a = a;
// a is the main object, will be at index '0'
// {one: 1} is the second object, index '1'
// {two: '2'} the third, in '2', and it has a string
// which will be found at index '3'

Flatted.stringify(a);
// [["1","2"],{"one":1,"a":"0"},{"two":"3"},"2"]
// a[one,two]    {one: 1, a}    {two: '2'}  '2'


y18n

NPM version js-standard-style Conventional Commits

The bare-bones internationalization library used by yargs.

Inspired by i18n.

Examples

simple string translation:

const __ = require('y18n')().__;

console.log(__('my awesome string %s', 'foo'));

output:

my awesome string foo

using tagged template literals

const __ = require('y18n')().__;

const str = 'foo';

console.log(__`my awesome string ${str}`);

output:

my awesome string foo

pluralization support:

const __n = require('y18n')().__n;

console.log(__n('one fish %s', '%d fishes %s', 2, 'foo'));

output:

2 fishes foo

Deno Example

As of v5 y18n supports Deno:

import y18n from "https://deno.land/x/y18n/deno.ts";

const __ = y18n({
  locale: 'pirate',
  directory: './test/locales'
}).__

console.info(__`Hi, ${'Ben'} ${'Coe'}!`)

You will need to run with --allow-read to load alternative locales.

JSON Language Files

The JSON language files should be stored in a ./locales folder. File names correspond to locales, e.g., en.json, pirate.json.

When strings are observed for the first time they will be added to the JSON file corresponding to the current locale.

Methods

require(‘y18n’)(config)

Create an instance of y18n with the config provided, options include:

y18n.__(str, arg, arg, arg)

Print a localized string, %s will be replaced with args.

This function can also be used as a tag for a template literal. You can use it like this: __`hello ${'world'}`. This will be equivalent to __('hello %s', 'world').

y18n.__n(singularString, pluralString, count, arg, arg, arg)

Print a localized string with appropriate pluralization. If %d is provided in the string, the count will replace this placeholder.

y18n.setLocale(str)

Set the current locale being used.

y18n.getLocale()

What locale is currently being used?

y18n.updateLocale(obj)

Update the current locale with the key value pairs in obj.

Libraries in this ecosystem make a best effort to track Node.js’ release schedule. Here’s a post on why we think this is important.

ISC



abort-controller

npm version Downloads/month Build Status Coverage Status Dependency Status

An implementation of WHATWG AbortController interface.

import AbortController from "abort-controller"

const controller = new AbortController()
const signal = controller.signal

signal.addEventListener("abort", () => {
    console.log("aborted!")
})

controller.abort()

https://jsfiddle.net/1r2994qp/1/

💿 Installation

Use npm to install, then use a bundler.

npm install abort-controller

Or download from dist directory.

📖 Usage

Basic

import AbortController from "abort-controller"
// or
const AbortController = require("abort-controller")

// or UMD version defines a global variable:
const AbortController = window.AbortControllerShim

If your bundler recognizes the browser field of package.json, the imported AbortController is the native one and does not include the shim (even if there is no native implementation). If you want to polyfill AbortController for IE, use abort-controller/polyfill.

Polyfilling

Importing abort-controller/polyfill assigns the AbortController shim to the AbortController global variable if there is no native implementation.

import "abort-controller/polyfill"
// or
require("abort-controller/polyfill")

API

AbortController

https://dom.spec.whatwg.org/#interface-abortcontroller

controller.signal

The AbortSignal object which is associated to this controller.

controller.abort()

Dispatches an abort event to the listeners registered on the controller’s signal.

📰 Changelog

🍻 Contributing

Contributing is welcome ❤️

Please use GitHub issues/PRs.

Development tools



has-value NPM version NPM downloads Build Status

Returns true if a value exists, false if empty. Works with deeply nested values using object paths.

Install

Install with npm:

$ npm install has-value --save

Works for:

Usage

Works with nested object paths or a single value:

var hasValue = require('has-value');

hasValue({a: {b: {c: 'foo'}}}, 'a.b.c');
//=> true

hasValue('a');
//=> true

hasValue('');
//=> false

hasValue(1);
//=> true

hasValue(0);
//=> false

hasValue(0, true); // pass `true` as the last arg to treat zero as a value
//=> true

hasValue({a: 'a'});
//=> true

hasValue({});
//=> false

hasValue(['a']);
//=> true

hasValue([]);
//=> false

hasValue(function(foo) {}); // function length/arity
//=> true

hasValue(function() {});
//=> false

hasValue(true);
hasValue(false);
//=> true

isEmpty

To do the opposite and test for empty values, do:

function isEmpty(o, isZero) {
  return !hasValue.apply(hasValue, arguments);
}

You might also be interested in these projects:

Contributing

Pull requests and stars are always welcome. For bugs and feature requests, please create an issue.

Building docs

Generate readme and API documentation with verb:

$ npm install verb && npm run docs

Or, if verb is installed globally:

$ verb

Running tests

Install dev dependencies:

$ npm install -d && npm test

Author

Jon Schlinkert


This file was generated by verb on March 27, 2016.



is-negated-glob NPM version NPM downloads Build Status

Returns an object with a negated boolean and the ! stripped from negation patterns. Also respects extglobs.

Install

Install with npm:

$ npm install --save is-negated-glob

Usage

var isNegatedGlob = require('is-negated-glob');

console.log(isNegatedGlob('foo'));
// { pattern: 'foo', negated: false }

console.log(isNegatedGlob('!foo'));
// { pattern: 'foo', negated: true }

console.log(isNegatedGlob('!(foo)'));
// extglob patterns are ignored
// { pattern: '!(foo)', negated: false }

About

Contributing

Pull requests and stars are always welcome. For bugs and feature requests, please create an issue.

Building docs

(This document was generated by verb-generate-readme (a verb generator), please don’t edit the readme directly. Any changes to the readme must be made in .verb.md.)

To generate the readme and API documentation with verb:

$ npm install -g verb verb-generate-readme && verb

Running tests

Install dev dependencies:

$ npm install -d && npm test

Author

Jon Schlinkert


This file was generated by verb-generate-readme, v0.1.30, on September 08, 2016.



concordance

Compare, format, diff and serialize any JavaScript value. Built for Node.js 10 and above.

Behavior

Concordance recursively describes JavaScript values, whether they’re booleans or complex object structures. It recurses through all enumerable properties, list items (e.g. arrays) and iterator entries.

The same algorithm is used when comparing, formatting or diffing values. This means Concordance’s behavior is consistent, no matter how you use it.

Comparison details

Formatting details

Concordance strives to format every aspect of a value that is used for comparisons. Formatting is optimized for human legibility.

Strings enjoy special formatting:

Similarly, line breaks in symbol descriptions are escaped.

Diffing details

Concordance tries to minimize diff lines. This is difficult with object values, which may have similar properties but a different constructor. Multi-line strings are compared line-by-line.

Serialization details

Concordance can serialize any value for later use. Deserialized values can be compared to each other or to regular JavaScript values. The deserialized value should be passed as the actual value to the comparison and diffing methods. Certain value comparisons behave differently when the actual value is deserialized:



Glob To Regular Expression

Build Status

Turn a *-wildcard style glob ("*.min.js") into a regular expression (/^.*\.min\.js$/)!

To match bash-like globs, eg. ? for any single-character match, [a-z] for character ranges, and {*.html, *.js} for multiple alternatives, call with { extended: true }.

To obey globstar ** rules, set the option {globstar: true}. NOTE: this changes the behavior of * when globstar is true, as shown below.

With {globstar: true}, /foo/** will match any string that starts with /foo/, such as /foo/index.htm or /foo/bar/baz.txt. Likewise, /foo/**/*.txt will match any string that starts with /foo/ and ends with .txt, such as /foo/bar.txt or /foo/bar/baz.txt. In contrast, /foo/* (a single *, not a globstar) will match strings that start with /foo/ and contain no further /, such as /foo/index.htm or /foo/baz.txt, but will not match /foo/bar/baz.txt or /foo/bar/baz/qux.dat.
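
The single-star versus globstar distinction can be illustrated with hand-written regex equivalents. These two regexes are assumptions about the generated output, written by hand for illustration; the module generates the actual patterns for you.

```javascript
// /foo/* with {globstar: true}: any chars except a path separator.
const single = /^\/foo\/[^/]*$/;
// /foo/**: any chars at all, including further separators.
const globstar = /^\/foo\/.*$/;

console.log(single.test('/foo/index.htm'));     // true
console.log(single.test('/foo/bar/baz.txt'));   // false (crosses a /)
console.log(globstar.test('/foo/bar/baz.txt')); // true
```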

Set flags on the resulting RegExp object by adding the flags property to the option object, eg { flags: "i" } for ignoring case.

Install

npm install glob-to-regexp

Usage

var globToRegExp = require('glob-to-regexp');
var re = globToRegExp("p*uck");
re.test("pot luck"); // true
re.test("pluck"); // true
re.test("puck"); // true

re = globToRegExp("*.min.js");
re.test("http://example.com/jquery.min.js"); // true
re.test("http://example.com/jquery.min.js.map"); // false

re = globToRegExp("*/www/*.js");
re.test("http://example.com/www/app.js"); // true
re.test("http://example.com/www/lib/factory-proxy-model-observer.js"); // true

// Extended globs
re = globToRegExp("*/www/{*.js,*.html}", { extended: true });
re.test("http://example.com/www/app.js"); // true
re.test("http://example.com/www/index.html"); // true

All rights reserved.



balanced-match

build status downloads

testling badge

Example

Get the first matching pair of braces:

var balanced = require('balanced-match');

console.log(balanced('{', '}', 'pre{in{nested}}post'));
console.log(balanced('{', '}', 'pre{first}between{second}post'));
console.log(balanced(/\s+\{\s+/, /\s+\}\s+/, 'pre  {   in{nest}   }  post'));

The matches are:

$ node example.js
{ start: 3, end: 14, pre: 'pre', body: 'in{nested}', post: 'post' }
{ start: 3,
  end: 9,
  pre: 'pre',
  body: 'first',
  post: 'between{second}post' }
{ start: 3, end: 17, pre: 'pre', body: 'in{nest}', post: 'post' }

API

var m = balanced(a, b, str)

For the first non-nested matching pair of a and b in str, return an object with those keys:

If there’s no match, undefined will be returned.

If the str contains more a than b / there are unmatched pairs, the first match that was closed will be used. For example, {{a} will match ['{', 'a', ''] and {a}} will match ['', 'a', '}'].

var r = balanced.range(a, b, str)

For the first non-nested matching pair of a and b in str, return an array with indexes: [ <a index>, <b index> ].

If there’s no match, undefined will be returned.

If the str contains more a than b / there are unmatched pairs, the first match that was closed will be used. For example, {{a} will match [ 1, 3 ] and {a}} will match [0, 2].

Installation

With npm do:

npm install balanced-match


is-data-descriptor NPM version Build Status

Returns true if a value has the characteristics of a valid JavaScript data descriptor.

Install

Install with npm:

$ npm i is-data-descriptor --save

Usage

var isDataDesc = require('is-data-descriptor');

Examples

true when the descriptor has valid properties with valid values.

// `value` can be anything
isDataDesc({value: 'foo'})
isDataDesc({value: function() {}})
isDataDesc({value: true})
//=> true

false when not an object

isDataDesc('a')
//=> false
isDataDesc(null)
//=> false
isDataDesc([])
//=> false

false when the object has invalid properties

isDataDesc({value: 'foo', bar: 'baz'})
//=> false
isDataDesc({value: 'foo', get: function(){}})
//=> false
isDataDesc({get: function(){}, value: 'foo'})
//=> false

false when a value is not the correct type

isDataDesc({value: 'foo', enumerable: 'foo'})
//=> false
isDataDesc({value: 'foo', configurable: 'foo'})
//=> false
isDataDesc({value: 'foo', writable: 'foo'})
//=> false

Valid properties

The only valid data descriptor properties are the following:

To be a valid data descriptor, either value or writable must be defined.
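
That rule can be sketched as a small predicate. This is a rough illustration of the checks described above, not the module's source; looksLikeDataDescriptor is a hypothetical name, and the real module also type-checks properties like writable and enumerable.

```javascript
// A data descriptor must be a plain object, define value or writable,
// and must not mix in accessor keys like get/set.
function looksLikeDataDescriptor(obj) {
  if (!obj || typeof obj !== 'object' || Array.isArray(obj)) return false;
  if (!('value' in obj) && !('writable' in obj)) return false;
  if ('get' in obj || 'set' in obj) return false;
  return true;
}

console.log(looksLikeDataDescriptor({ value: 'foo' }));           // true
console.log(looksLikeDataDescriptor({ value: 'foo', get() {} })); // false
console.log(looksLikeDataDescriptor('a'));                        // false
```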

Invalid properties

A descriptor may have additional invalid properties (an error will not be thrown).

var foo = {};

Object.defineProperty(foo, 'bar', {
  enumerable: true,
  whatever: 'blah', // invalid, but doesn't cause an error
  get: function() {
    return 'baz';
  }
});

console.log(foo.bar);
//=> 'baz'

Running tests

Install dev dependencies:

$ npm i -d && npm test

Contributing

Pull requests and stars are always welcome. For bugs and feature requests, please create an issue.

Author

Jon Schlinkert


This file was generated by verb on December 28, 2015.



fast-deep-equal

The fastest deep equal with ES6 Map, Set and Typed arrays support.

Build Status npm Coverage Status

Install

npm install fast-deep-equal

Features

ES6 equal (require('fast-deep-equal/es6')) also supports Maps, Sets and Typed arrays.

Usage

var equal = require('fast-deep-equal');
console.log(equal({foo: 'bar'}, {foo: 'bar'})); // true

To support ES6 Maps, Sets and Typed arrays equality use:

var equal = require('fast-deep-equal/es6');
console.log(equal(new Int16Array([1, 2]), new Int16Array([1, 2]))); // true

To use with React (avoiding the traversal of React elements’ _owner property that contains circular references and is not needed when comparing the elements - borrowed from react-fast-compare):

var equal = require('fast-deep-equal/react');
var equal = require('fast-deep-equal/es6/react');

Performance benchmark

Node.js v12.6.0:

fast-deep-equal x 261,950 ops/sec ±0.52% (89 runs sampled)
fast-deep-equal/es6 x 212,991 ops/sec ±0.34% (92 runs sampled)
fast-equals x 230,957 ops/sec ±0.83% (85 runs sampled)
nano-equal x 187,995 ops/sec ±0.53% (88 runs sampled)
shallow-equal-fuzzy x 138,302 ops/sec ±0.49% (90 runs sampled)
underscore.isEqual x 74,423 ops/sec ±0.38% (89 runs sampled)
lodash.isEqual x 36,637 ops/sec ±0.72% (90 runs sampled)
deep-equal x 2,310 ops/sec ±0.37% (90 runs sampled)
deep-eql x 35,312 ops/sec ±0.67% (91 runs sampled)
ramda.equals x 12,054 ops/sec ±0.40% (91 runs sampled)
util.isDeepStrictEqual x 46,440 ops/sec ±0.43% (90 runs sampled)
assert.deepStrictEqual x 456 ops/sec ±0.71% (88 runs sampled)

The fastest is fast-deep-equal

To run benchmark (requires node.js 6+):

npm run benchmark

Please note: this benchmark runs against the available test cases. To choose the most performant library for your application, it is recommended to benchmark against your data and to NOT expect this benchmark to reflect the performance difference in your application.

Enterprise support

Security contact

To report a security vulnerability, please use the Tidelift security contact. Tidelift will coordinate the fix and disclosure. Please do NOT report security vulnerability via GitHub issues.

@version    1.4.0
@date       2015-10-26
@stability  3 - Stable


Natural Compare – Build Coverage

Compare strings containing a mix of letters and numbers in the way a human being would in sort order. This is described as a “natural ordering”.

Standard sorting:   Natural order sorting:
    img1.png            img1.png
    img10.png           img2.png
    img12.png           img10.png
    img2.png            img12.png

String.naturalCompare returns a number indicating whether a reference string comes before, after, or is the same as the given string in sort order. Use it with the built-in sort() function.

Installation

<script src="min.natural-compare.js"></script>
require("natural-compare-lite")

Usage

// Simple case sensitive example
var a = ["z1.doc", "z10.doc", "z17.doc", "z2.doc", "z23.doc", "z3.doc"];
a.sort(String.naturalCompare);
// ["z1.doc", "z2.doc", "z3.doc", "z10.doc", "z17.doc", "z23.doc"]

// Use wrapper function for case insensitivity
a.sort(function(a, b){
  return String.naturalCompare(a.toLowerCase(), b.toLowerCase());
})

// In most cases we want to sort an array of objects
var a = [ {"street":"350 5th Ave", "room":"A-1021"}
        , {"street":"350 5th Ave", "room":"A-21046-b"} ];

// sort by street, then by room
a.sort(function(a, b){
  return String.naturalCompare(a.street, b.street) || String.naturalCompare(a.room, b.room);
})

// When text transformation is needed (e.g. toLowerCase()),
// it is best for performance to store the
// transformed key on the object itself, so the
// transformation does not have to run on every
// comparison while sorting.
var a = [ {"make":"Audi", "model":"A6"}
        , {"make":"Kia",  "model":"Rio"} ];

// sort by make, then by model
a.map(function(car){
  car.sort_key = (car.make + " " + car.model).toLowerCase();
})
a.sort(function(a, b){
  return String.naturalCompare(a.sort_key, b.sort_key);
})

Custom alphabet

It is possible to configure a custom alphabet to achieve a desired order.

// Estonian alphabet
String.alphabet = "ABDEFGHIJKLMNOPRSŠZŽTUVÕÄÖÜXYabdefghijklmnoprsšzžtuvõäöüxy";
["t", "z", "x", "õ"].sort(String.naturalCompare);
// ["z", "t", "õ", "x"]

// Russian alphabet
String.alphabet = "АБВГДЕЁЖЗИЙКЛМНОПРСТУФХЦЧШЩЪЫЬЭЮЯабвгдеёжзийклмнопрстуфхцчшщъыьэюя"
["Ё", "А", "Б"].sort(String.naturalCompare)
// ["А", "Б", "Ё"]

Licence



fresh

NPM Version NPM Downloads Node.js Version Build Status Test Coverage

HTTP response freshness testing

Installation

This is a Node.js module available through the npm registry. Installation is done using the npm install command:

npm install fresh

API

var fresh = require('fresh')

fresh(reqHeaders, resHeaders)

Check freshness of the response using request and response headers.

When the response is still “fresh” in the client’s cache true is returned, otherwise false is returned to indicate that the client cache is now stale and the full response should be sent.

When a client sends the Cache-Control: no-cache request header to indicate an end-to-end reload request, this module will return false to make handling these requests transparent.
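The ETag branch of that behavior can be sketched with a hypothetical helper (the real fresh module also handles If-Modified-Since and full Cache-Control parsing):

```javascript
// Minimal sketch: a response is "fresh" for the ETag case when the
// client's If-None-Match equals the response ETag, except on an
// end-to-end reload (Cache-Control: no-cache), which is always stale.
function isFreshEtag (reqHeaders, resHeaders) {
  if (reqHeaders['cache-control'] === 'no-cache') return false
  var noneMatch = reqHeaders['if-none-match']
  if (!noneMatch) return false
  return noneMatch === resHeaders['etag']
}

console.log(isFreshEtag({ 'if-none-match': '"foo"' }, { etag: '"foo"' })) // true
console.log(isFreshEtag(
  { 'if-none-match': '"foo"', 'cache-control': 'no-cache' },
  { etag: '"foo"' }
)) // false
```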

Known Issues

This module is designed to only follow the HTTP specifications, not to work around all kinds of client bugs (especially since this module typically does not receive enough information to understand what the client actually is).

There is a known issue that in certain versions of Safari, Safari will incorrectly make a request that allows this module to validate freshness of the resource even when Safari does not have a representation of the resource in the cache. The module jumanji can be used in an Express application to work-around this issue and also provides links to further reading on this Safari bug.

Example

API usage

var reqHeaders = { 'if-none-match': '"foo"' }
var resHeaders = { 'etag': '"bar"' }
fresh(reqHeaders, resHeaders)
// => false

var reqHeaders = { 'if-none-match': '"foo"' }
var resHeaders = { 'etag': '"foo"' }
fresh(reqHeaders, resHeaders)
// => true

Using with Node.js http server

var fresh = require('fresh')
var http = require('http')

var server = http.createServer(function (req, res) {
  // perform server logic
  // ... including adding ETag / Last-Modified response headers

  if (isFresh(req, res)) {
    // client has a fresh copy of resource
    res.statusCode = 304
    res.end()
    return
  }

  // send the resource
  res.statusCode = 200
  res.end('hello, world!')
})

function isFresh (req, res) {
  return fresh(req.headers, {
    'etag': res.getHeader('ETag'),
    'last-modified': res.getHeader('Last-Modified')
  })
}

server.listen(3000)


json-parse-even-better-errors

json-parse-even-better-errors is a Node.js library for getting nicer errors out of JSON.parse(), including context and position of the parse errors.

It also preserves the newline and indentation styles of the JSON data, by putting them in the object or array in the Symbol.for('indent') and Symbol.for('newline') properties.

Install

npm install --save json-parse-even-better-errors


Example

const parseJson = require('json-parse-even-better-errors')

parseJson('"foo"') // returns the string 'foo'
parseJson('garbage') // more useful error message
parseJson.noExceptions('garbage') // returns undefined

Features

Indentation

To preserve indentation when the file is saved back to disk, use data[Symbol.for('indent')] as the third argument to JSON.stringify, and if you want to preserve windows \r\n newlines, replace the \n chars in the string with data[Symbol.for('newline')].

For example:

const txt = await readFile('./package.json', 'utf8')
const data = parseJsonEvenBetterErrors(txt)
const indent = Symbol.for('indent')
const newline = Symbol.for('newline')
// .. do some stuff to the data ..
const string = JSON.stringify(data, null, data[indent]) + '\n'
const eolFixed = data[newline] === '\n' ? string
  : string.replace(/\n/g, data[newline])
await writeFile('./package.json', eolFixed)

Indentation is determined by looking at the whitespace between the initial { and [ and the character that follows it. If you have lots of weird inconsistent indentation, then it won’t track that or give you any way to preserve it. Whether this is a bug or a feature is debatable ;)
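The detection described above can be sketched as follows. This is a simplification for illustration, not the module's exact code:

```javascript
// Sketch: look at the whitespace between the opening { or [ and the
// first following character to recover the newline and indent styles.
function detectFormat (txt) {
  var m = txt.match(/^\s*[{[]((?:\r?\n)+)([ \t]*)/)
  return {
    newline: m ? m[1] : '\n',
    indent: m ? m[2] : ''
  }
}

console.log(detectFormat('{\n  "a": 1\n}'))     // { newline: '\n', indent: '  ' }
console.log(detectFormat('{\r\n\t"a": 1\r\n}')) // { newline: '\r\n', indent: '\t' }
```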

API

parse(txt, reviver = null, context = 20)

Works just like JSON.parse, but will include a bit more information when an error happens, and attaches a Symbol.for('indent') and Symbol.for('newline') on objects and arrays. This throws a JSONParseError.

parse.noExceptions(txt, reviver = null)

Works just like JSON.parse, but will return undefined rather than throwing an error.

class JSONParseError(er, text, context = 20, caller = null)

Extends the JavaScript SyntaxError class to parse the message and provide better metadata.

Pass in the error thrown by the built-in JSON.parse, and the text being parsed, and it’ll parse out the bits needed to be helpful.

context defaults to 20.

Set a caller function to trim internal implementation details out of the stack trace. When calling parseJson, this is set to the parseJson function. If not set, then the constructor defaults to itself, so the stack trace will point to the spot where you call new JSONParseError.



JavaScript MD5

Contents

Description

JavaScript MD5 implementation.
Compatible with server-side environments like Node.js, module loaders like RequireJS or webpack and all web browsers.

Usage

Client-side

Install the blueimp-md5 package with NPM:

npm install blueimp-md5

Include the (minified) JavaScript MD5 script in your HTML markup:

<script src="js/md5.min.js"></script>

In your application code, calculate the (hex-encoded) MD5 hash of a string by calling the md5 method with the string as argument:

var hash = md5('value') // "2063c1608d6e0baf80249c42e2be5804"

Server-side

The following is an example of how to use the JavaScript MD5 module on the server-side with Node.js.

Install the blueimp-md5 package with NPM:

npm install blueimp-md5

Add a file server.js with the following content:

require('http')
  .createServer(function (req, res) {
    // The md5 module exports the md5() function:
    var md5 = require('./md5'),
      // Use the following version if you installed the package with npm:
      // var md5 = require("blueimp-md5"),
      url = require('url'),
      query = url.parse(req.url).query
    res.writeHead(200, { 'Content-Type': 'text/plain' })
    // Calculate and print the MD5 hash of the url query:
    res.end(md5(query))
  })
  .listen(8080, 'localhost')
console.log('Server running at http://localhost:8080/')

Run the application with the following command:

node server.js

Requirements

The JavaScript MD5 script has zero dependencies.

API

Calculate the (hex-encoded) MD5 hash of a given string value:

var hash = md5('value') // "2063c1608d6e0baf80249c42e2be5804"

Calculate the (hex-encoded) HMAC-MD5 hash of a given string value and key:

var hash = md5('value', 'key') // "01433efd5f16327ea4b31144572c67f6"

Calculate the raw MD5 hash of a given string value:

var hash = md5('value', null, true)

Calculate the raw HMAC-MD5 hash of a given string value and key:

var hash = md5('value', 'key', true)

Tests

The JavaScript MD5 project comes with Unit Tests.
There are two different ways to run the tests:

The first one tests the browser integration, the second one the Node.js integration.



fast-levenshtein - Levenshtein algorithm in Javascript

Build Status NPM module NPM downloads Follow on Twitter

An efficient Javascript implementation of the Levenshtein algorithm with locale-specific collator support.

Features

Installation

node.js

Install using npm:

$ npm install fast-levenshtein

Browser

Using bower:

$ bower install fast-levenshtein

If you are not using any module loader system, the API will be accessible via the window.Levenshtein object.

Examples

Default usage

var levenshtein = require('fast-levenshtein');

var distance = levenshtein.get('back', 'book');   // 2
var distance = levenshtein.get('我愛你', '我叫你');   // 1

Locale-sensitive string comparisons

It supports using Intl.Collator for locale-sensitive string comparisons:

var levenshtein = require('fast-levenshtein');

levenshtein.get('mikailovitch', 'Mikhaïlovitch', { useCollator: true});
// 1
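For illustration, the underlying algorithm is the classic dynamic-programming Levenshtein distance. This is a textbook sketch, not the module's heavily optimized implementation:

```javascript
// Textbook Levenshtein: keep one previous row of edit distances and
// fill the next row from deletions, insertions and substitutions.
function levenshtein (a, b) {
  var prev = []
  for (var j = 0; j <= b.length; j++) prev[j] = j
  for (var i = 1; i <= a.length; i++) {
    var curr = [i]
    for (j = 1; j <= b.length; j++) {
      curr[j] = Math.min(
        prev[j] + 1,     // deletion
        curr[j - 1] + 1, // insertion
        prev[j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1) // substitution
      )
    }
    prev = curr
  }
  return prev[b.length]
}

console.log(levenshtein('back', 'book')) // 2
```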

Building and Testing

To build the code and run the tests:

$ npm install -g grunt-cli
$ npm install
$ npm run build

Performance

Thanks to Titus Wormer for encouraging me to do this.

Benchmarked against other node.js levenshtein distance modules (on Macbook Air 2012, Core i7, 8GB RAM):

Running suite Implementation comparison [benchmark/speed.js]...
>> levenshtein-edit-distance x 234 ops/sec ±3.02% (73 runs sampled)
>> levenshtein-component x 422 ops/sec ±4.38% (83 runs sampled)
>> levenshtein-deltas x 283 ops/sec ±3.83% (78 runs sampled)
>> natural x 255 ops/sec ±0.76% (88 runs sampled)
>> levenshtein x 180 ops/sec ±3.55% (86 runs sampled)
>> fast-levenshtein x 1,792 ops/sec ±2.72% (95 runs sampled)
Benchmark done.
Fastest test is fast-levenshtein at 4.2x faster than levenshtein-component

You can run this benchmark yourself by doing:

$ npm install
$ npm run build
$ npm run benchmark

Contributing

If you wish to submit a pull request please update and/or create new tests for any changes you make and ensure the grunt build passes.

See CONTRIBUTING.md for details.



object.pick NPM version NPM monthly downloads NPM total downloads Linux Build Status

Returns a filtered copy of an object with only the specified keys, similar to _.pick from lodash / underscore.

You might also be interested in object.omit.

Install

Install with npm:

$ npm install --save object.pick

benchmarks

This is the fastest implementation I tested. Pull requests welcome!

Usage

var pick = require('object.pick');

pick({a: 'a', b: 'b'}, 'a')
//=> {a: 'a'}

pick({a: 'a', b: 'b', c: 'c'}, ['a', 'b'])
//=> {a: 'a', b: 'b'}
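The behavior shown above can be sketched in a few lines (a hypothetical simplification; the real module also validates its inputs):

```javascript
// Minimal pick sketch: accept a single key or an array of keys and
// copy only those that exist on the source object.
function pick (obj, keys) {
  if (typeof keys === 'string') keys = [keys]
  var res = {}
  for (var i = 0; i < keys.length; i++) {
    if (keys[i] in obj) res[keys[i]] = obj[keys[i]]
  }
  return res
}

console.log(pick({ a: 'a', b: 'b' }, 'a'))             // { a: 'a' }
console.log(pick({ a: 'a', b: 'b', c: 'c' }, ['a', 'b'])) // { a: 'a', b: 'b' }
```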

About

Contributing

Pull requests and stars are always welcome. For bugs and feature requests, please create an issue.

Building docs

(This document was generated by verb-generate-readme (a verb generator), please don’t edit the readme directly. Any changes to the readme must be made in .verb.md.)

To generate the readme and API documentation with verb:

$ npm install -g verb verb-generate-readme && verb

Running tests

Install dev dependencies:

$ npm install -d && npm test

Author

Jon Schlinkert


This file was generated by verb-generate-readme, v0.2.0, on October 27, 2016.


collection-visit NPM version NPM monthly downloads NPM total downloads Linux Build Status

Visit a method over the items in an object, or map visit over the objects in an array.

Install

Install with npm:

$ npm install --save collection-visit

Usage

var visit = require('collection-visit');

var ctx = {
  data: {},
  set: function (key, value) {
    if (typeof key === 'object') {
      visit(ctx, 'set', key);
    } else {
      ctx.data[key] = value;
    }
  }
};

ctx.set('a', 'a');
ctx.set('b', 'b');
ctx.set('c', 'c');
ctx.set({d: {e: 'f'}});

console.log(ctx.data);
//=> {a: 'a', b: 'b', c: 'c', d: { e: 'f' }};
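What visit does for a plain object can be sketched as below. This is a hypothetical simplification; the real module delegates to object-visit and map-visit, so it also accepts arrays of objects:

```javascript
// Minimal visit sketch: call thisArg[method](key, value) for every
// key/value pair in the target object.
function visit (thisArg, method, target) {
  Object.keys(target).forEach(function (key) {
    thisArg[method](key, target[key])
  })
  return thisArg
}

var ctx = {
  data: {},
  set: function (key, value) {
    if (typeof key === 'object') {
      visit(ctx, 'set', key)
    } else {
      ctx.data[key] = value
    }
  }
}

ctx.set('a', 'a')
ctx.set({ d: { e: 'f' } })
console.log(ctx.data) // { a: 'a', d: { e: 'f' } }
```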

About

Contributing

Pull requests and stars are always welcome. For bugs and feature requests, please create an issue.

Commits Contributor
13 jonschlinkert
9 doowb

Building docs

(This project’s readme.md is generated by verb, please don’t edit the readme directly. Any changes to the readme must be made in the .verb.md readme template.)

To generate the readme, run the following command:

$ npm install -g verbose/verb#dev verb-generate-readme && verb

Running tests

Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command:

$ npm install && npm test

Author

Jon Schlinkert


This file was generated by verb-generate-readme, v0.5.0, on April 09, 2017.


tslib

This is a runtime library for TypeScript that contains all of the TypeScript helper functions.

This library is primarily used by the --importHelpers flag in TypeScript. When using --importHelpers, a module that would otherwise declare helper functions like __extends and __assign inline, as in the following emitted file:

var __assign = (this && this.__assign) || Object.assign || function(t) {
    for (var s, i = 1, n = arguments.length; i < n; i++) {
        s = arguments[i];
        for (var p in s) if (Object.prototype.hasOwnProperty.call(s, p))
            t[p] = s[p];
    }
    return t;
};
exports.x = {};
exports.y = __assign({}, exports.x);

will instead be emitted as something like the following:

var tslib_1 = require("tslib");
exports.x = {};
exports.y = tslib_1.__assign({}, exports.x);

Because this can avoid duplicate declarations of things like __extends, __assign, etc., this means delivering users smaller files on average, as well as less runtime overhead. For optimized bundles with TypeScript, you should absolutely consider using tslib and --importHelpers.



Installing

For the latest stable version, run:

npm

# TypeScript 2.3.3 or later
npm install tslib

# TypeScript 2.3.2 or earlier
npm install tslib@1.6.1

yarn

# TypeScript 2.3.3 or later
yarn add tslib

# TypeScript 2.3.2 or earlier
yarn add tslib@1.6.1

bower

# TypeScript 2.3.3 or later
bower install tslib

# TypeScript 2.3.2 or earlier
bower install tslib@1.6.1

JSPM

# TypeScript 2.3.3 or later
jspm install tslib

# TypeScript 2.3.2 or earlier
jspm install tslib@1.6.1


Usage

Set the importHelpers compiler option on the command line:

tsc --importHelpers file.ts

or in your tsconfig.json:

{
    "compilerOptions": {
        "importHelpers": true
    }
}

For bower and JSPM users

You will need to add a paths mapping for tslib, e.g. For Bower users:

{
    "compilerOptions": {
        "module": "amd",
        "importHelpers": true,
        "baseUrl": "./",
        "paths": {
            "tslib" : ["bower_components/tslib/tslib.d.ts"]
        }
    }
}

For JSPM users:

{
    "compilerOptions": {
        "module": "system",
        "importHelpers": true,
        "baseUrl": "./",
        "paths": {
            "tslib" : ["jspm_packages/npm/tslib@1.[version].0/tslib.d.ts"]
        }
    }
}


Contribute

There are many ways to contribute to TypeScript.



Documentation



for-in NPM version NPM monthly downloads NPM total downloads Linux Build Status

Iterate over the own and inherited enumerable properties of an object, and return an object with properties that evaluate to true from the callback. Exit early by returning false. JavaScript/Node.js

Install

Install with npm:

$ npm install --save for-in

Usage

var forIn = require('for-in');

var obj = {a: 'foo', b: 'bar', c: 'baz'};
var values = [];
var keys = [];

forIn(obj, function (value, key, o) {
  keys.push(key);
  values.push(value);
});

console.log(keys);
//=> ['a', 'b', 'c'];

console.log(values);
//=> ['foo', 'bar', 'baz'];
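The early-exit behavior mentioned above (returning false from the callback stops iteration) can be sketched with a minimal hypothetical implementation:

```javascript
// Minimal forIn sketch: iterate own and inherited enumerable
// properties, stopping as soon as the callback returns false.
function forIn (obj, fn, thisArg) {
  for (var key in obj) {
    if (fn.call(thisArg, obj[key], key, obj) === false) break
  }
}

var keys = []
forIn({ a: 'foo', b: 'bar', c: 'baz' }, function (value, key) {
  keys.push(key)
  if (key === 'b') return false // exit early
})
console.log(keys) // ['a', 'b']
```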

About

Contributing

Pull requests and stars are always welcome. For bugs and feature requests, please create an issue.

Commits Contributor
16 jonschlinkert
2 paulirish

Building docs

(This project’s readme.md is generated by verb, please don’t edit the readme directly. Any changes to the readme must be made in the .verb.md readme template.)

To generate the readme, run the following command:

$ npm install -g verbose/verb#dev verb-generate-readme && verb

Running tests

Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command:

$ npm install && npm test

Author

Jon Schlinkert


This file was generated by verb-generate-readme, v0.4.2, on February 28, 2017.


Statuses

NPM Version NPM Downloads Node.js Version Build Status Test Coverage

HTTP status utility for node.

This module provides a list of status codes and messages sourced from a few different projects:

Installation

This is a Node.js module available through the npm registry. Installation is done using the npm install command:

$ npm install statuses

API

var status = require('statuses')

var code = status(Integer || String)

If Integer or String is a valid HTTP code or status message, then the appropriate code will be returned. Otherwise, an error will be thrown.

status(403) // => 403
status('403') // => 403
status('forbidden') // => 403
status('Forbidden') // => 403
status(306) // throws, as it's not supported by node.js

status.STATUS_CODES

Returns an object which maps status codes to status messages, in the same format as the Node.js http module.

status.codes

Returns an array of all the status codes as Integers.

var msg = status[code]

Map of code to status message. undefined for invalid codes.

status[404] // => 'Not Found'

var code = status[msg]

Map of status message to code. msg can either be title-cased or lower-cased. undefined for invalid status messages.

status['not found'] // => 404
status['Not Found'] // => 404

status.redirect[code]

Returns true if a status code is a valid redirect status.

status.redirect[200] // => undefined
status.redirect[301] // => true

status.empty[code]

Returns true if a status code expects an empty body.

status.empty[200] // => undefined
status.empty[204] // => true
status.empty[304] // => true

status.retry[code]

Returns true if you should retry the request.

status.retry[501] // => undefined
status.retry[503] // => true




teeny-request

Like request, but much smaller - and with fewer options. Uses node-fetch under the hood. Pop it in where you would use request. Improves load and parse time of modules.

const request = require('teeny-request').teenyRequest;

request({uri: 'http://ip.jsontest.com/'}, function (error, response, body) {
  console.log('error:', error); // Print the error if one occurred
  console.log('statusCode:', response && response.statusCode); // Print the response status code if a response was received
  console.log('body:', body); // Print the JSON.
});

For TypeScript, you can use @types/request.

import {teenyRequest as request} from 'teeny-request';
import * as r from 'request'; // Only for type declarations

request({uri: 'http://ip.jsontest.com/'}, (error: any, response: r.Response, body: any) => {
  console.log('error:', error); // Print the error if one occurred
  console.log('statusCode:', response && response.statusCode); // Print the response status code if a response was received
  console.log('body:', body); // Print the JSON.
});

teenyRequest(options, callback)

Options are limited to the following

request({uri:'http://service.com/upload', method:'POST', json: {key:'value'}}, function(err,httpResponse,body){ /* ... */ })

The callback argument gets 3 arguments:

defaults(options)

Set default options for every teenyRequest call.

let defaultRequest = teenyRequest.defaults({timeout: 60000});
defaultRequest({uri: 'http://ip.jsontest.com/'}, function (error, response, body) {
  assert.ifError(error);
  assert.strictEqual(response.statusCode, 200);
  console.log(body.ip);
  assert.notEqual(body.ip, null);

  done();
});

Proxy environment variables

If environment variables HTTP_PROXY or HTTPS_PROXY are set, they are respected. NO_PROXY is currently not implemented.

Building with Webpack 4+

Since 4.0.0, Webpack uses javascript/esm for .mjs files, which handles ESM more strictly compared to javascript/auto. If you get the error Can't import the named export 'PassThrough' from non EcmaScript module, please add the following to your Webpack config:

{
    test: /\.mjs$/,
    type: 'javascript/auto',
},

Motivation

request has a ton of options and features and is accordingly large. Requiring a module incurs load and parse time. For request, that is around 600ms.

Load time of request measured with require-so-slow

teeny-request doesn’t have any of the bells and whistles that request has, but it is much faster to load. If startup time is an issue and you don’t need much beyond a basic GET and POST, you can use teeny-request.

Thanks

Special thanks to billyjacobson for suggesting the name. Please report all bugs to them. Just kidding. Please open issues.



typedarray-to-buffer travis npm downloads javascript style guide

Convert a typed array to a Buffer without a copy.


Say you’re using the ‘buffer’ module on npm, or browserify and you’re working with lots of binary data.

Unfortunately, sometimes the browser or someone else’s API gives you a typed array like Uint8Array to work with and you need to convert it to a Buffer. What do you do?

Of course: Buffer.from(uint8array)

But, alas, every time you do Buffer.from(uint8array) the entire array gets copied. The Buffer constructor does a copy; this is defined by the node docs and the ‘buffer’ module matches the node API exactly.

So, how can we avoid this expensive copy in performance critical applications?

Simply use this module, of course!

If you have an ArrayBuffer, you don’t need this module, because Buffer.from(arrayBuffer) is already efficient.

install

npm install typedarray-to-buffer

usage

To convert a typed array to a Buffer without a copy, do this:

var toBuffer = require('typedarray-to-buffer')

var arr = new Uint8Array([1, 2, 3])
arr = toBuffer(arr)

// arr is a buffer now!

arr.toString()  // '\u0001\u0002\u0003'
arr.readUInt16BE(0)  // 258

how it works

If the browser supports typed arrays, then toBuffer will augment the typed array you pass in with the Buffer methods and return it. See how does Buffer work? for more about how augmentation works.

This module uses the typed array’s underlying ArrayBuffer to back the new Buffer. This respects the “view” on the ArrayBuffer, i.e. byteOffset and byteLength. In other words, if you do toBuffer(new Uint32Array([1, 2, 3])), then the new Buffer will contain [1, 0, 0, 0, 2, 0, 0, 0, 3, 0, 0, 0], not [1, 2, 3]. And it still doesn’t require a copy.
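In Node, the no-copy conversion described above can be sketched with the built-in Buffer API (a simplification; the real module also handles old browsers):

```javascript
// Share the typed array's underlying ArrayBuffer, honoring the
// view's byteOffset and byteLength — no bytes are copied.
function toBufferSketch (arr) {
  return Buffer.from(arr.buffer, arr.byteOffset, arr.byteLength)
}

var arr = new Uint32Array([1, 2, 3])
var buf = toBufferSketch(arr)
console.log(buf.length)              // 12 (3 elements × 4 bytes each)
console.log(buf.buffer === arr.buffer) // true — same underlying memory
```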

If the browser doesn’t support typed arrays, then toBuffer will create a new Buffer object, copy the data into it, and return it. There’s no simple performance optimization we can do for old browsers. Oh well.

If this module is used in node, then it will just call Buffer.from. This is just for the convenience of modules that work in both node and the browser.



is-extglob NPM version NPM downloads Build Status

Returns true if a string has an extglob.

Install

Install with npm:

$ npm install --save is-extglob

Usage

var isExtglob = require('is-extglob');

True

isExtglob('?(abc)');
isExtglob('@(abc)');
isExtglob('!(abc)');
isExtglob('*(abc)');
isExtglob('+(abc)');

False

Escaped extglobs:

isExtglob('\\?(abc)');
isExtglob('\\@(abc)');
isExtglob('\\!(abc)');
isExtglob('\\*(abc)');
isExtglob('\\+(abc)');

Everything else…

isExtglob('foo.js');
isExtglob('!foo.js');
isExtglob('*.js');
isExtglob('**/abc.js');
isExtglob('abc/*.js');
isExtglob('abc/(aaa|bbb).js');
isExtglob('abc/[a-z].js');
isExtglob('abc/{a,b}.js');
isExtglob('abc/?.js');
isExtglob('abc.js');
isExtglob('abc/def/ghi.js');
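The detection sketched below captures the idea behind these results: an extglob is one of ?(…) @(…) !(…) *(…) +(…) not preceded by a backslash. This is an illustrative approximation, not the module's exact code:

```javascript
// Scan for extglob patterns, letting the escape alternative (\\ + any
// char) consume escaped candidates so they don't count as matches.
function isExtglobSketch (str) {
  if (typeof str !== 'string' || str === '') return false
  var re = /(\\).|([@?!+*]\(.*\))/g
  var m
  while ((m = re.exec(str))) {
    if (m[2]) return true
  }
  return false
}

console.log(isExtglobSketch('?(abc)'))   // true
console.log(isExtglobSketch('\\?(abc)')) // false (escaped)
console.log(isExtglobSketch('foo.js'))   // false
```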

History

v2.0

Adds support for escaping. Escaped extglobs no longer return true.

About

Contributing

Pull requests and stars are always welcome. For bugs and feature requests, please create an issue.

Building docs

(This document was generated by verb-generate-readme (a verb generator), please don’t edit the readme directly. Any changes to the readme must be made in .verb.md.)

To generate the readme and API documentation with verb:

$ npm install -g verb verb-generate-readme && verb

Running tests

Install dev dependencies:

$ npm install -d && npm test

Author

Jon Schlinkert


This file was generated by verb-generate-readme, v0.1.31, on October 12, 2016.


map-cache NPM version NPM downloads Build Status

Basic cache object for storing key-value pairs.

Install

Install with npm:

$ npm install map-cache --save

Usage

var MapCache = require('map-cache');
var mapCache = new MapCache();

API

MapCache

Creates a cache object to store key/value pairs.

Example

var cache = new MapCache();

.set

Adds value to key on the cache.

Params

Example

cache.set('foo', 'bar');

.get

Gets the cached value for key.

Params

Example

cache.get('foo');
//=> 'bar'

.has

Checks if a cached value for key exists.

Params

Example

cache.has('foo');
//=> true

.del

Removes key and its value from the cache.

Params

Example

cache.del('foo');
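The whole API above can be sketched as a thin wrapper around a plain object. This is a hypothetical re-implementation for illustration, not the module's source:

```javascript
// Minimal MapCache sketch: store key/value pairs on an internal object.
function MapCache (cache) {
  this.__data__ = cache || {}
}
MapCache.prototype.set = function (key, value) {
  this.__data__[key] = value
  return this
}
MapCache.prototype.get = function (key) {
  return this.__data__[key]
}
MapCache.prototype.has = function (key) {
  return this.__data__.hasOwnProperty(key)
}
MapCache.prototype.del = function (key) {
  return this.has(key) && delete this.__data__[key]
}

var cache = new MapCache()
cache.set('foo', 'bar')
console.log(cache.get('foo')) // 'bar'
cache.del('foo')
console.log(cache.has('foo')) // false
```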

You might also be interested in these projects:

Contributing

Pull requests and stars are always welcome. For bugs and feature requests, please create an issue.

Building docs

Generate readme and API documentation with verb:

$ npm install verb && npm run docs

Or, if verb is installed globally:

$ verb

Running tests

Install dev dependencies:

$ npm install -d && npm test

Author

Jon Schlinkert


This file was generated by verb, v0.9.0, on May 10, 2016.


fast-json-stable-stringify

Deterministic JSON.stringify() - a faster version of [@substack](https://github.com/substack)’s json-stable-stringify without jsonify.

You can also pass in a custom comparison function.

Build Status Coverage Status



example

var stringify = require('fast-json-stable-stringify');
var obj = { c: 8, b: [{z:6,y:5,x:4},7], a: 3 };
console.log(stringify(obj));

output:

{"a":3,"b":[{"x":4,"y":5,"z":6},7],"c":8}
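The sorted-keys idea behind that output can be sketched in a few lines. This is a simplification; the real module also supports custom comparators, cycles, and undefined values:

```javascript
// Deterministic stringify sketch: serialize object keys in sorted
// order so equal objects always produce identical strings.
function stableStringify (obj) {
  if (Array.isArray(obj)) {
    return '[' + obj.map(stableStringify).join(',') + ']'
  }
  if (obj && typeof obj === 'object') {
    var keys = Object.keys(obj).sort()
    return '{' + keys.map(function (k) {
      return JSON.stringify(k) + ':' + stableStringify(obj[k])
    }).join(',') + '}'
  }
  return JSON.stringify(obj)
}

console.log(stableStringify({ c: 8, b: [{ z: 6, y: 5, x: 4 }, 7], a: 3 }))
// {"a":3,"b":[{"x":4,"y":5,"z":6},7],"c":8}
```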


methods

var stringify = require('fast-json-stable-stringify')

var str = stringify(obj, opts)

Return a deterministic stringified string str from the object obj.

options

cmp

If opts is given, you can supply an opts.cmp to have a custom comparison function for object keys. Your function opts.cmp is called with these parameters:

opts.cmp({ key: akey, value: avalue }, { key: bkey, value: bvalue })

For example, to sort on the object key names in reverse order you could write:

var stringify = require('fast-json-stable-stringify');

var obj = { c: 8, b: [{z:6,y:5,x:4},7], a: 3 };
var s = stringify(obj, function (a, b) {
    return a.key < b.key ? 1 : -1;
});
console.log(s);

which results in the output string:

{"c":8,"b":[{"z":6,"y":5,"x":4},7],"a":3}

Or if you wanted to sort on the object values in reverse order, you could write:

var stringify = require('fast-json-stable-stringify');

var obj = { d: 6, c: 5, b: [{z:3,y:2,x:1},9], a: 10 };
var s = stringify(obj, function (a, b) {
    return a.value < b.value ? 1 : -1;
});
console.log(s);

which outputs:

{"d":6,"c":5,"b":[{"z":3,"y":2,"x":1},9],"a":10}

cycles

Pass true in opts.cycles to stringify circular property as __cycle__ - the result will not be a valid JSON string in this case.

TypeError will be thrown in case of circular object without this option.



install

With npm do:

npm install fast-json-stable-stringify


benchmark

To run benchmark (requires Node.js 6+):

node benchmark

Results:

fast-json-stable-stringify x 17,189 ops/sec ±1.43% (83 runs sampled)
json-stable-stringify x 13,634 ops/sec ±1.39% (85 runs sampled)
fast-stable-stringify x 20,212 ops/sec ±1.20% (84 runs sampled)
faster-stable-stringify x 15,549 ops/sec ±1.12% (84 runs sampled)
The fastest is fast-stable-stringify

Enterprise support

Security contact

To report a security vulnerability, please use the Tidelift security contact. Tidelift will coordinate the fix and disclosure. Please do NOT report security vulnerability via GitHub issues.





TypeScript Scope Manager

CI NPM Version NPM Downloads

This is a fork of eslint-scope, enhanced to support TypeScript functionality. You can view the original licence for the code here.

This package is consumed automatically by @typescript-eslint/parser. You probably don’t want to use it directly.

Getting Started

You can find our Getting Started docs here

Installation

$ yarn add -D typescript @typescript-eslint/scope-manager
$ npm i --save-dev typescript @typescript-eslint/scope-manager

API

analyze(tree, options)

Analyses a given AST and returns the resulting ScopeManager.

interface AnalyzeOptions {
  /**
   * Known visitor keys.
   */
  childVisitorKeys?: Record<string, string[]> | null;

  /**
   * Which ECMAScript version is considered.
   * Defaults to `2018`.
   */
  ecmaVersion?: EcmaVersion;

  /**
   * Whether the whole script is executed under node.js environment.
   * When enabled, the scope manager adds a function scope immediately following the global scope.
   * Defaults to `false`.
   */
  globalReturn?: boolean;

  /**
   * Implied strict mode (if ecmaVersion >= 5).
   * Defaults to `false`.
   */
  impliedStrict?: boolean;

  /**
   * The identifier that's used for JSX Element creation (after transpilation).
   * This should not be a member expression - just the root identifier (i.e. use "React" instead of "React.createElement").
   * Defaults to `"React"`.
   */
  jsxPragma?: string;

  /**
   * The identifier that's used for JSX fragment elements (after transpilation).
   * If `null`, assumes transpilation will always use a member on `jsxFactory` (i.e. React.Fragment).
   * This should not be a member expression - just the root identifier (i.e. use "h" instead of "h.Fragment").
   * Defaults to `null`.
   */
  jsxFragmentName?: string | null;

  /**
   * The lib used by the project.
   * This automatically defines a type variable for any types provided by the configured TS libs.
   * For more information, see https://www.typescriptlang.org/tsconfig#lib
   *
   * Defaults to the lib for the provided `ecmaVersion`.
   */
  lib?: Lib[];

  /**
   * The source type of the script.
   */
  sourceType?: 'script' | 'module';
}

Example usage:

import { analyze } from '@typescript-eslint/scope-manager';
import { parse } from '@typescript-eslint/typescript-estree';

const code = `const hello: string = 'world';`;
const ast = parse(code, {
  // note that scope-manager requires ranges on the AST
  range: true,
});
const scope = analyze(ast, {
  ecmaVersion: 2020,
  sourceType: 'module',
});

References

Contributing

See the contributing guide here



object-visit NPM version NPM monthly downloads NPM total downloads Linux Build Status

Call a specified method on each value in the given object.

Install

Install with npm:

$ npm install --save object-visit

Usage

var visit = require('object-visit');

var ctx = {
  data: {},
  set: function (key, value) {
    if (typeof key === 'object') {
      visit(ctx, 'set', key);
    } else {
      ctx.data[key] = value;
    }
  }
};

ctx.set('a', 'a');
ctx.set('b', 'b');
ctx.set('c', 'c');
ctx.set({d: {e: 'f'}});

console.log(ctx.data);
//=> {a: 'a', b: 'b', c: 'c', d: { e: 'f' }};

About

Contributing

Pull requests and stars are always welcome. For bugs and feature requests, please create an issue.

Building docs

(This project’s readme.md is generated by verb, please don’t edit the readme directly. Any changes to the readme must be made in the .verb.md readme template.)

To generate the readme, run the following command:

$ npm install -g verbose/verb#dev verb-generate-readme && verb

Running tests

Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command:

$ npm install && npm test

Author

Jon Schlinkert


This file was generated by verb-generate-readme, v0.6.0, on May 30, 2017.



use NPM version NPM monthly downloads NPM total downloads Linux Build Status

Easily add plugin support to your node.js application.

Please consider following this project’s author, Jon Schlinkert, and consider starring the project to show your :heart: and support.

Install

Install with npm:

$ npm install --save use

A different take on plugin handling! This is not a middleware system; if you need something that handles async middleware, ware is great for that.

Usage

const use = require('use');

See the examples folder for usage examples.
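The plugin pattern this module enables can be sketched in a few lines. This is an illustrative decorator only, not the module's actual implementation; the helper name `withPlugins` is made up for the example:

```javascript
// Illustrative sketch of the plugin pattern (not the real `use` internals):
// give an object a chainable .use() method that invokes plugin functions on it.
function withPlugins(app) {
  app.use = function (fn) {
    fn.call(app, app); // the plugin receives the app and may decorate it
    return app;        // chainable, so registrations can be composed
  };
  return app;
}

var app = withPlugins({});
app.use(function (app) {
  app.greet = function (name) { return 'hello ' + name; };
});

console.log(app.greet('world')); // 'hello world'
```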

About

Contributing

Pull requests and stars are always welcome. For bugs and feature requests, please create an issue.

Running Tests

Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command:

$ npm install && npm test

Building docs

(This project’s readme.md is generated by verb, please don’t edit the readme directly. Any changes to the readme must be made in the .verb.md readme template.)

To generate the readme, run the following command:

$ npm install -g verbose/verb#dev verb-generate-readme && verb

You might also be interested in these projects:

Commits Contributor
37 jonschlinkert
7 charlike-old
2 doowb
2 wtgtybhertgeghgtwtg

Author

Jon Schlinkert


This file was generated by verb-generate-readme, v0.6.0, on July 12, 2018.



arr-union NPM version Build Status

Combines a list of arrays, returning a single array with unique values, using strict equality for comparisons.

Install

Install with npm:

$ npm i arr-union --save

Benchmarks

This library is 10-20 times faster than array-union.

See the benchmarks.

#1: five-arrays
  array-union x 511,121 ops/sec ±0.80% (96 runs sampled)
  arr-union x 5,716,039 ops/sec ±0.86% (93 runs sampled)

#2: ten-arrays
  array-union x 245,196 ops/sec ±0.69% (94 runs sampled)
  arr-union x 1,850,786 ops/sec ±0.84% (97 runs sampled)

#3: two-arrays
  array-union x 563,869 ops/sec ±0.97% (94 runs sampled)
  arr-union x 9,602,852 ops/sec ±0.87% (92 runs sampled)

Usage

var union = require('arr-union');

union(['a'], ['b', 'c'], ['d', 'e', 'f']);
//=> ['a', 'b', 'c', 'd', 'e', 'f']

Returns only unique elements:

union(['a', 'a'], ['b', 'c']);
//=> ['a', 'b', 'c']

Contributing

Pull requests and stars are always welcome. For bugs and feature requests, please create an issue.

Building docs

Generate readme and API documentation with verb:

$ npm i verb && npm run docs

Or, if verb is installed globally:

$ verb

Running tests

Install dev dependencies:

$ npm i -d && npm test

Author

Jon Schlinkert


This file was generated by verb, v0.9.0, on February 23, 2016.



glob-parent Build Status Coverage Status

Javascript module to extract the non-magic parent path from a glob string.

NPM NPM

Usage

npm install glob-parent --save

Examples

var globParent = require('glob-parent');

globParent('path/to/*.js'); // 'path/to'
globParent('/root/path/to/*.js'); // '/root/path/to'
globParent('/*.js'); // '/'
globParent('*.js'); // '.'
globParent('**/*.js'); // '.'
globParent('path/{to,from}'); // 'path'
globParent('path/!(to|from)'); // 'path'
globParent('path/?(to|from)'); // 'path'
globParent('path/+(to|from)'); // 'path'
globParent('path/*(to|from)'); // 'path'
globParent('path/@(to|from)'); // 'path'
globParent('path/**/*'); // 'path'

// if provided a non-glob path, returns the nearest dir
globParent('path/foo/bar.js'); // 'path/foo'
globParent('path/foo/'); // 'path/foo'
globParent('path/foo'); // 'path' (see issue #3 for details)

Escaping

The following characters have special significance in glob patterns and must be escaped if you want them to be treated as regular path characters:

Example

globParent('foo/[bar]/') // 'foo'
globParent('foo/\\[bar]/') // 'foo/[bar]'

Limitations

Braces & Brackets

This library attempts a quick and imperfect method of determining which path parts have glob magic without fully parsing/lexing the pattern. There are some advanced use cases that can trip it up, such as nested braces where the outer pair is escaped and the inner one contains a path separator. If you find yourself in the unlikely circumstance of being affected by this or need to ensure higher-fidelity glob handling in your library, it is recommended that you pre-process your input with expand-braces and/or expand-brackets.

Windows

Backslashes are not valid path separators for globs. If a path with backslashes is provided anyway, for simple cases, glob-parent will replace the path separator for you and return the non-glob parent path (now with forward-slashes, which are still valid as Windows path separators).

This cannot be used in conjunction with escape characters.

// BAD
globParent('C:\\Program Files \\(x86\\)\\*.ext') // 'C:/Program Files /(x86/)'

// GOOD
globParent('C:/Program Files\\(x86\\)/*.ext') // 'C:/Program Files (x86)'

If you are using escape characters for a pattern without path parts (i.e. relative to cwd), prefix with ./ to avoid confusing glob-parent.

// BAD
globParent('foo \\[bar]') // 'foo '
globParent('foo \\[bar]*') // 'foo '

// GOOD
globParent('./foo \\[bar]') // 'foo [bar]'
globParent('./foo \\[bar]*') // '.'

Change Log

See release notes page on GitHub

ISC



clone

build status

info badge

offers foolproof deep cloning of objects, arrays, numbers, strings etc. in JavaScript.

Installation

npm install clone

(It also works with browserify, ender or standalone.)

Example

var clone = require('clone');

var a, b;

a = { foo: { bar: 'baz' } };  // initial value of a

b = clone(a);                 // clone a -> b
a.foo.bar = 'foo';            // change a

console.log(a);               // show a
console.log(b);               // show b

This will print:

{ foo: { bar: 'foo' } }
{ foo: { bar: 'baz' } }

clone masters cloning simple objects (even with custom prototype), arrays, Date objects, and RegExp objects. Everything is cloned recursively, so that you can clone dates in arrays in objects, for example.

API

clone(val, circular, depth)

clone.clonePrototype(obj)

Does a prototype clone as described by Oran Looney.

Circular References

var a, b;

a = { hello: 'world' };

a.myself = a;
b = clone(a);

console.log(b);

This will print:

{ hello: "world", myself: [Circular] }

So, b.myself points to b, not a. Neat!
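The trick behind circular-safe cloning is worth seeing on its own. A minimal sketch of the technique (simplified; this is not clone's actual source, which also handles Dates, RegExps, depth limits, and more):

```javascript
// Sketch of circular-safe deep cloning: track visited originals and their
// copies in parallel arrays; when a value is seen again, reuse its copy
// instead of recursing forever.
function deepClone(value, seen, copies) {
  seen = seen || [];
  copies = copies || [];
  if (value === null || typeof value !== 'object') return value;
  var idx = seen.indexOf(value);
  if (idx !== -1) return copies[idx]; // circular reference: reuse the copy
  var copy = Array.isArray(value)
    ? []
    : Object.create(Object.getPrototypeOf(value)); // keep custom prototypes
  seen.push(value);
  copies.push(copy);
  Object.keys(value).forEach(function (key) {
    copy[key] = deepClone(value[key], seen, copies);
  });
  return copy;
}

var a = { hello: 'world' };
a.myself = a;
var b = deepClone(a);
console.log(b.myself === b); // true, just like clone
```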

Test

npm test

Caveat

Some special objects like a socket or process.stdout/stderr are known to not be cloneable. If you find other objects that cannot be cloned, please open an issue.

Bugs and Issues

If you encounter any bugs or issues, feel free to open an issue at github or send me an email to . I also always like to hear from you, if you’re using my code.



is-plain-object NPM version NPM monthly downloads NPM total downloads Linux Build Status

Returns true if an object was created by the Object constructor.

Install

Install with npm:

$ npm install --save is-plain-object

Use isobject if you only want to check if the value is an object and not an array or null.

Usage

var isPlainObject = require('is-plain-object');

true when created by the Object constructor.

isPlainObject(Object.create({}));
//=> true
isPlainObject(Object.create(Object.prototype));
//=> true
isPlainObject({foo: 'bar'});
//=> true
isPlainObject({});
//=> true

false when not created by the Object constructor.

isPlainObject(1);
//=> false
isPlainObject(['foo', 'bar']);
//=> false
isPlainObject([]);
//=> false
isPlainObject(new Foo);
//=> false
isPlainObject(null);
//=> false
isPlainObject(Object.create(null));
//=> false

About

Contributing

Pull requests and stars are always welcome. For bugs and feature requests, please create an issue.

Commits Contributor
17 jonschlinkert
6 stevenvachon
3 onokumus
1 wtgtybhertgeghgtwtg

Building docs

(This project’s readme.md is generated by verb, please don’t edit the readme directly. Any changes to the readme must be made in the .verb.md readme template.)

To generate the readme, run the following command:

$ npm install -g verbose/verb#dev verb-generate-readme && verb

Running tests

Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command:

$ npm install && npm test

Author

Jon Schlinkert


This file was generated by verb-generate-readme, v0.6.0, on July 11, 2017.



rimraf

Build Status Dependency Status devDependency Status

The UNIX command rm -rf for node.

Install with npm install rimraf, or just drop rimraf.js somewhere.

API

rimraf(f, [opts], callback)

The first parameter will be interpreted as a globbing pattern for files. If you want to disable globbing you can do so with opts.disableGlob (defaults to false). This might be handy, for instance, if you have filenames that contain globbing wildcard characters.

The callback will be called with an error if there is one. Certain errors are handled for you:

options

rimraf.sync

It can remove stuff synchronously, too. But that’s not so good. Use the async API. It’s better.

CLI

If installed with npm install rimraf -g it can be used as a global command rimraf <path> [<path> ...] which is useful for cross platform support.

mkdirp

If you need to create a directory recursively, check out mkdirp.



is-accessor-descriptor NPM version Build Status

Returns true if a value has the characteristics of a valid JavaScript accessor descriptor.


Install

Install with npm:

$ npm i is-accessor-descriptor --save

Usage

var isAccessor = require('is-accessor-descriptor');

isAccessor({get: function() {}});
//=> true

You may also pass an object and property name to check if the property is an accessor:

isAccessor(foo, 'bar');

Examples

false when not an object

isAccessor('a')
isAccessor(null)
isAccessor([])
//=> false

true when the object has valid properties

and the properties all have the correct JavaScript types:

isAccessor({get: noop, set: noop})
isAccessor({get: noop})
isAccessor({set: noop})
//=> true

false when the object has invalid properties

isAccessor({get: noop, set: noop, bar: 'baz'})
isAccessor({get: noop, writable: true})
isAccessor({get: noop, value: true})
//=> false

false when an accessor is not a function

isAccessor({get: noop, set: 'baz'})
isAccessor({get: 'foo', set: noop})
isAccessor({get: 'foo', bar: 'baz'})
isAccessor({get: 'foo', set: 'baz'})
//=> false

false when a value is not the correct type

isAccessor({get: noop, set: noop, enumerable: 'foo'})
isAccessor({set: noop, configurable: 'foo'})
isAccessor({get: noop, configurable: 'foo'})
//=> false

Running tests

Install dev dependencies:

$ npm i -d && npm test

Contributing

Pull requests and stars are always welcome. For bugs and feature requests, please create an issue.

Author

Jon Schlinkert


This file was generated by verb on December 28, 2015.



node-url

Build Status

This module has utilities for URL resolution and parsing meant to have feature parity with the node.js core url module.

var url = require('url');

api

Parsed URL objects have some or all of the following fields, depending on whether or not they exist in the URL string. Any parts that are not in the URL string will not be in the parsed object. Examples are shown for the URL

'http://user:pass@host.com:8080/p/a/t/h?query=string#hash'

The following methods are provided by the URL module:

url.parse(urlStr, [parseQueryString], [slashesDenoteHost])

Take a URL string, and return an object.

Pass true as the second argument to also parse the query string using the querystring module. Defaults to false.

Pass true as the third argument to treat //foo/bar as { host: 'foo', pathname: '/bar' } rather than { pathname: '//foo/bar' }. Defaults to false.

url.format(urlObj)

Take a parsed URL object, and return a formatted URL string.

url.resolve(from, to)

Take a base URL, and a href URL, and resolve them as a browser would for an anchor tag. Examples:

url.resolve('/one/two/three', 'four')         // '/one/two/four'
url.resolve('http://example.com/', '/one')    // 'http://example.com/one'
url.resolve('http://example.com/one', '/two') // 'http://example.com/two'


reusify

npm version Build Status Coverage Status

Reuse your objects and functions for maximum speed. This technique will make any function run ~10% faster. You call your functions a lot, and it adds up quickly in hot code paths.

$ node benchmarks/createNoCodeFunction.js
Total time 53133
Total iterations 100000000
Iteration/s 1882069.5236482036

$ node benchmarks/reuseNoCodeFunction.js
Total time 50617
Total iterations 100000000
Iteration/s 1975620.838848608

The above benchmark uses fibonacci to simulate a real high-cpu load. The actual numbers might differ for your use case, but the difference should not.

The benchmark was taken using Node v6.10.0.

This library was extracted from fastparallel.

Example

var reusify = require('reusify')
var fib = require('reusify/benchmarks/fib')
var instance = reusify(MyObject)

// get an object from the cache,
// or creates a new one when cache is empty
var obj = instance.get()

// set the state
obj.num = 100
obj.func()

// reset the state.
// if the state contains any external object
// do not use delete operator (it is slow)
// prefer set them to null
obj.num = 0

// store an object in the cache
instance.release(obj)

function MyObject () {
  // you need to define this property
  // so V8 can compile MyObject into an
  // hidden class
  this.next = null
  this.num = 0

  var that = this

  // this function is never reallocated,
  // so it can be optimized by V8
  this.func = function () {
    if (null) {
      // do nothing
    } else {
      // calculates fibonacci
      fib(that.num)
    }
  }
}

The above example was intended for synchronous code, let’s see async:

var reusify = require('reusify')
var instance = reusify(MyObject)

for (var i = 0; i < 100; i++) {
  getData(i, console.log)
}

function getData (value, cb) {
  var obj = instance.get()

  obj.value = value
  obj.cb = cb
  obj.run()
}

function MyObject () {
  this.next = null
  this.value = null

  var that = this

  this.run = function () {
    asyncOperation(that.value, that.handle)
  }

  this.handle = function (err, result) {
    that.cb(err, result)
    that.value = null
    that.cb = null
    instance.release(that)
  }
}

Also note how, in the above examples, the code that consumes an instance of MyObject resets the state to its initial condition just before storing it in the cache. That's needed so that every subsequent request for an instance from the cache gets a clean instance.

Why

It is faster because V8 doesn’t have to collect all the functions you create. On a short-lived benchmark, it is as fast as creating the nested function, but on a longer time frame it creates less pressure on the garbage collector.
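The mechanism itself is small: released objects are chained through the next property that every constructor used with reusify must define, forming a free list. A minimal sketch, close in spirit to the real module but simplified:

```javascript
// Sketch of a free-list object pool in the style of reusify (simplified,
// not the actual module). get() pops from the chain of released objects
// and only allocates when the chain is empty; release() pushes back on.
function reusify (Constructor) {
  var head = new Constructor()

  return {
    get: function () {
      var current = head
      if (current.next) {
        head = current.next
      } else {
        head = new Constructor()
      }
      current.next = null
      return current
    },
    release: function (obj) {
      obj.next = head
      head = obj
    }
  }
}

// Constructors used with the pool must define `next`, as MyObject does above.
function Obj () {
  this.next = null
  this.n = 0
}

var pool = reusify(Obj)
var first = pool.get()
pool.release(first)
console.log(pool.get() === first) // true: the released object is reused
```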

Other examples

If you want to see some complex example, checkout middie and steed.

Acknowledgements

Thanks to Trevor Norris for getting me down the rabbit hole of performance, and thanks to Mathias Buus for suggesting me to share this trick.



mime-types

NPM Version NPM Downloads Node.js Version Build Status Test Coverage

The ultimate javascript content-type utility.

Similar to the mime@1.x module, except:

Otherwise, the API is compatible with mime 1.x.

Install

This is a Node.js module available through the npm registry. Installation is done using the npm install command:

$ npm install mime-types

Adding Types

All mime types are based on mime-db, so open a PR there if you’d like to add mime types.

API

var mime = require('mime-types')

All functions return false if input is invalid or not found.

mime.lookup(path)

Lookup the content-type associated with a file.

mime.lookup('json') // 'application/json'
mime.lookup('.md') // 'text/markdown'
mime.lookup('file.html') // 'text/html'
mime.lookup('folder/file.js') // 'application/javascript'
mime.lookup('folder/.htaccess') // false

mime.lookup('cats') // false

mime.contentType(type)

Create a full content-type header given a content-type or extension. When given an extension, mime.lookup is used to get the matching content-type, otherwise the given content-type is used. Then if the content-type does not already have a charset parameter, mime.charset is used to get the default charset and add to the returned content-type.

mime.contentType('markdown') // 'text/x-markdown; charset=utf-8'
mime.contentType('file.json') // 'application/json; charset=utf-8'
mime.contentType('text/html') // 'text/html; charset=utf-8'
mime.contentType('text/html; charset=iso-8859-1') // 'text/html; charset=iso-8859-1'

// from a full path
mime.contentType(path.extname('/path/to/file.json')) // 'application/json; charset=utf-8'

mime.extension(type)

Get the default extension for a content-type.

mime.extension('application/octet-stream') // 'bin'

mime.charset(type)

Lookup the implied default charset of a content-type.

mime.charset('text/markdown') // 'UTF-8'

var type = mime.types[extension]

A map of content-types by extension.

[extensions...] = mime.extensions[type]

A map of extensions by content-type.

object.assign Version Badge

npm badge

browser support

An Object.assign shim. Invoke its “shim” method to shim Object.assign if it is unavailable.

This package implements the es-shim API interface. It works in an ES3-supported environment and complies with the spec. In an ES6 environment, it will also work properly with Symbols.

Takes a minimum of 2 arguments: target and source. Takes a variable-sized list of source arguments: at least 1, as many as you want. Throws a TypeError if the target argument is null or undefined.

Most common usage:

var assign = require('object.assign').getPolyfill(); // returns native method if compliant
    /* or */
var assign = require('object.assign/polyfill')(); // returns native method if compliant

Example

var assert = require('assert');

// Multiple sources!
var target = { a: true };
var source1 = { b: true };
var source2 = { c: true };
var sourceN = { n: true };

var expected = {
    a: true,
    b: true,
    c: true,
    n: true
};

assign(target, source1, source2, sourceN);
assert.deepEqual(target, expected); // AWESOME!
var target = {
    a: true,
    b: true,
    c: true
};
var source1 = {
    c: false,
    d: false
};
var sourceN = {
    e: false
};

var assigned = assign(target, source1, sourceN);
assert.equal(target, assigned); // returns the target object
assert.deepEqual(assigned, {
    a: true,
    b: true,
    c: false,
    d: false,
    e: false
});
/* when Object.assign is not present */
delete Object.assign;
var shimmedAssign = require('object.assign').shim();
    /* or */
var shimmedAssign = require('object.assign/shim')();

assert.equal(shimmedAssign, assign);

var target = {
    a: true,
    b: true,
    c: true
};
var source = {
    c: false,
    d: false,
    e: false
};

var assigned = assign(target, source);
assert.deepEqual(Object.assign(target, source), assign(target, source));
/* when Object.assign is present */
var shimmedAssign = require('object.assign').shim();
assert.equal(shimmedAssign, Object.assign);

var target = {
    a: true,
    b: true,
    c: true
};
var source = {
    c: false,
    d: false,
    e: false
};

assert.deepEqual(Object.assign(target, source), assign(target, source));

Tests

Simply clone the repo, npm install, and run npm test



union-value NPM version NPM monthly downloads NPM total downloads Linux Build Status

Install

Install with npm:

$ npm install --save union-value

Usage

var union = require('union-value');

var obj = {};

union(obj, 'a.b.c', ['one', 'two']);
union(obj, 'a.b.c', ['three']);

console.log(obj);
//=> {a: {b: {c: [ 'one', 'two', 'three' ] }}}

About

Contributing

Pull requests and stars are always welcome. For bugs and feature requests, please create an issue.

Building docs

(This project’s readme.md is generated by verb, please don’t edit the readme directly. Any changes to the readme must be made in the .verb.md readme template.)

To generate the readme, run the following command:

$ npm install -g verbose/verb#dev verb-generate-readme && verb

Running tests

Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command:

$ npm install && npm test

Author

Jon Schlinkert


This file was generated by verb-generate-readme, v0.4.2, on February 25, 2017.



safe-regex

Detect potentially catastrophic exponential-time regular expressions by limiting the star height to 1.

WARNING: This module has both false positives and false negatives. Use vuln-regex-detector for improved accuracy.

Build Status

Example

Suppose you have a script named safe.js:

var safe = require('safe-regex');
var regex = process.argv.slice(2).join(' ');
console.log(safe(regex));

This is its behavior:

$ node safe.js '(x+x+)+y'
false
$ node safe.js '(beep|boop)*'
true
$ node safe.js '(a+){10}'
false
$ node safe.js '\blocation\s*:[^:\n]+\b(Oakland|San Francisco)\b'
true

Methods

const safe = require('safe-regex')

const ok = safe(re, opts={})

Return a boolean ok whether or not the regex re is safe and not possibly catastrophic.

re can be a RegExp object or just a string.

If the re is a string and is an invalid regex, returns false.

Install

With npm do:

npm install safe-regex

Resources

What should I do if my project has a super-linear regex?

  1. Confirm that it is reachable by untrusted input.
  2. If it is, you can consider whether you can prevent worst-case behavior by trimming the input, revising the regex, or replacing the regex with another algorithm like string functions. For examples, see Table 5 in this article.
  3. If none of those solutions looks feasible, you might also consider changing regex engines. The RE2 bindings might work, though test carefully to confirm there are no semantic portability problems.

Further reading

The following documents may be edifying:

Project policies

Versioning

This project follows Semantic Versioning 2.0 (semver).

Here are the project-specific meanings of MAJOR, MINOR, and PATCH updates:

define-property NPM version NPM monthly downloads NPM total downloads Linux Build Status

Define a non-enumerable property on an object.

Install

Install with npm:

$ npm install --save define-property

Install with yarn:

$ yarn add define-property

Usage

Params

var define = require('define-property');
var obj = {};
define(obj, 'foo', function(val) {
  return val.toUpperCase();
});

console.log(obj);
//=> {}

console.log(obj.foo('bar'));
//=> 'BAR'

get/set

define(obj, 'foo', {
  get: function() {},
  set: function() {}
});

About

Contributing

Pull requests and stars are always welcome. For bugs and feature requests, please create an issue.

Building docs

(This project’s readme.md is generated by verb, please don’t edit the readme directly. Any changes to the readme must be made in the .verb.md readme template.)

To generate the readme, run the following command:

$ npm install -g verbose/verb#dev verb-generate-readme && verb

Running tests

Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command:

$ npm install && npm test

Author

Jon Schlinkert


This file was generated by verb-generate-readme, v0.5.0, on April 20, 2017.



assert

Build Status

This module is used for writing unit tests for your applications, you can access it with require('assert').

It aims to be fully compatible with the node.js assert module: same API and same behavior, just adding support for web browsers. The API and code may contain traces of the CommonJS Unit Testing 1.0 spec which they were based on, but both have evolved significantly since then.

A strict and a legacy mode exist, while it is recommended to only use strict mode.

Strict mode

When using the strict mode, any assert function will use the equality used in the strict function mode. So assert.deepEqual() will, for example, work the same as assert.deepStrictEqual().

It can be accessed using:

const assert = require('assert').strict;

Legacy mode

Deprecated: Use strict mode instead.

When accessing assert directly instead of using the strict property, the Abstract Equality Comparison will be used for any function without “strict” in its name (e.g. assert.deepEqual()).

It can be accessed using:

const assert = require('assert');

It is recommended to use the strict mode instead, as the Abstract Equality Comparison can often have surprising results. This is especially true for assert.deepEqual(), where the comparison rules used are very lax.

E.g.

// WARNING: This does not throw an AssertionError!
assert.deepEqual(/a/gi, new Date());

assert.fail(actual, expected, message, operator)

Throws an exception that displays the values for actual and expected separated by the provided operator.

assert(value, message), assert.ok(value, message)

Tests if value is truthy; it is equivalent to assert.equal(true, !!value, message).

assert.equal(actual, expected, message)

Tests shallow, coercive equality with the equal comparison operator ( == ).

assert.notEqual(actual, expected, message)

Tests shallow, coercive non-equality with the not equal comparison operator ( != ).

assert.deepEqual(actual, expected, message)

Tests for deep equality.

assert.deepStrictEqual(actual, expected, message)

Tests for deep equality, as determined by the strict equality operator ( === )

assert.notDeepEqual(actual, expected, message)

Tests for any deep inequality.

assert.strictEqual(actual, expected, message)

Tests strict equality, as determined by the strict equality operator ( === )

assert.notStrictEqual(actual, expected, message)

Tests strict non-equality, as determined by the strict not equal operator ( !== )

assert.throws(block, error, message)

Expects block to throw an error. error can be a constructor, a RegExp, or a validation function.

Validate instanceof using constructor:

assert.throws(function() { throw new Error("Wrong value"); }, Error);

Validate error message using RegExp:

assert.throws(function() { throw new Error("Wrong value"); }, /value/);

Custom error validation:

assert.throws(function() {
    throw new Error("Wrong value");
}, function(err) {
    if ( (err instanceof Error) && /value/.test(err) ) {
        return true;
    }
}, "unexpected error");

assert.doesNotThrow(block, message)

Expects block not to throw an error, see assert.throws for details.

assert.ifError(value)

Throws value if it is truthy; does nothing if it is falsy. Useful when testing the first argument, error, in callbacks.



merge2

Merge multiple streams into one stream in sequence or parallel.

NPM version Build Status Downloads

Install

Install with npm

npm install merge2

Usage

const gulp = require('gulp')
const merge2 = require('merge2')
const concat = require('gulp-concat')
const minifyHtml = require('gulp-minify-html')
const ngtemplate = require('gulp-ngtemplate')

gulp.task('app-js', function () {
  return merge2(
      gulp.src('static/src/tpl/*.html')
        .pipe(minifyHtml({empty: true}))
        .pipe(ngtemplate({
          module: 'genTemplates',
          standalone: true
        })
      ), gulp.src([
        'static/src/js/app.js',
        'static/src/js/locale_zh-cn.js',
        'static/src/js/router.js',
        'static/src/js/tools.js',
        'static/src/js/services.js',
        'static/src/js/filters.js',
        'static/src/js/directives.js',
        'static/src/js/controllers.js'
      ])
    )
    .pipe(concat('app.js'))
    .pipe(gulp.dest('static/dist/js/'))
})
const stream = merge2([stream1, stream2], stream3, {end: false})
//...
stream.add(stream4, stream5)
//..
stream.end()
// equal to merge2([stream1, stream2], stream3)
const stream = merge2()
stream.add([stream1, stream2])
stream.add(stream3)
// merge order:
//   1. merge `stream1`;
//   2. merge `stream2` and `stream3` in parallel after `stream1` merged;
//   3. merge 'stream4' after `stream2` and `stream3` merged;
const stream = merge2(stream1, [stream2, stream3], stream4)

// merge order:
//   1. merge `stream5` and `stream6` in parallel after `stream4` merged;
//   2. merge 'stream7' after `stream5` and `stream6` merged;
stream.add([stream5, stream6], stream7)
// nest merge
// equal to merge2(stream1, stream2, stream6, stream3, [stream4, stream5]);
const streamA = merge2(stream1, stream2)
const streamB = merge2(stream3, [stream4, stream5])
const stream = merge2(streamA, streamB)
streamA.add(stream6)

API

const merge2 = require('merge2')

merge2()

merge2(options)

merge2(stream1, stream2, …, streamN)

merge2(stream1, stream2, …, streamN, options)

merge2(stream1, [stream2, stream3, …], streamN, options)

Returns a duplex stream (mergedStream). Streams passed in an array will be merged in parallel.

mergedStream.add(stream)

mergedStream.add(stream1, [stream2, stream3, …], …)

Returns the mergedStream.

mergedStream.on(‘queueDrain’, function() {})

It will emit ‘queueDrain’ when all streams have been merged. If you set end === false in the options, this event gives you notice that you should add more streams to merge or end the mergedStream.

stream

option Type: Readable or Duplex or Transform stream.

options

option Type: Object.

objectMode and the other options (highWaterMark, defaultEncoding, …) are the same as for Node.js streams.



mixin-deep NPM version NPM monthly downloads NPM total downloads Linux Build Status

Deeply mix the properties of objects into the first object. Like merge-deep, but doesn’t clone.

Please consider following this project’s author, Jon Schlinkert, and consider starring the project to show your :heart: and support.

Install

Install with npm:

$ npm install --save mixin-deep

Usage

var mixinDeep = require('mixin-deep');

mixinDeep({a: {aa: 'aa'}}, {a: {bb: 'bb'}}, {a: {cc: 'cc'}});
//=> { a: { aa: 'aa', bb: 'bb', cc: 'cc' } }
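
The “doesn’t clone” distinction matters: the first argument is mutated and returned as-is. A self-contained sketch of these semantics (an illustration only, not the package's code):

```javascript
// Inline sketch of deep-mixin semantics: recurse into nested plain
// objects, otherwise assign; the target itself is never cloned.
function mixinDeepSketch(target, ...sources) {
  for (const source of sources) {
    for (const key of Object.keys(source)) {
      const val = source[key];
      if (val && typeof val === 'object' && !Array.isArray(val) &&
          target[key] && typeof target[key] === 'object') {
        mixinDeepSketch(target[key], val); // mix into the existing object
      } else {
        target[key] = val;
      }
    }
  }
  return target; // same reference as the first argument
}

const target = { a: { aa: 'aa' } };
const result = mixinDeepSketch(target, { a: { bb: 'bb' } }, { a: { cc: 'cc' } });
console.log(result === target); // true (mutated in place, not cloned)
console.log(result);            // { a: { aa: 'aa', bb: 'bb', cc: 'cc' } }
```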

About

Contributing

Pull requests and stars are always welcome. For bugs and feature requests, please create an issue.

Running Tests

Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command:

$ npm install && npm test
Building docs

(This project’s readme.md is generated by verb, please don’t edit the readme directly. Any changes to the readme must be made in the .verb.md readme template.)

To generate the readme, run the following command:

$ npm install -g verbose/verb#dev verb-generate-readme && verb
You might also be interested in these projects:

Author

Jon Schlinkert


This file was generated by verb-generate-readme, v0.6.0, on December 09, 2017.

arr-flatten NPM version NPM monthly downloads NPM total downloads Linux Build Status Windows Build Status

Recursively flatten an array or arrays.

Install

Install with npm:

$ npm install --save arr-flatten

Install

Install with bower

$ bower install arr-flatten --save

Usage

var flatten = require('arr-flatten');

flatten(['a', ['b', ['c']], 'd', ['e']]);
//=> ['a', 'b', 'c', 'd', 'e']

Why another flatten utility?

I wanted the fastest implementation I could find, with implementation choices that should work for 95% of use cases, but no cruft to cover the other 5%.

About

Contributing

Pull requests and stars are always welcome. For bugs and feature requests, please create an issue.

Commits Contributor
20 jonschlinkert
1 lukeed

Building docs

(This project’s readme.md is generated by verb, please don’t edit the readme directly. Any changes to the readme must be made in the .verb.md readme template.)

To generate the readme, run the following command:

$ npm install -g verbose/verb#dev verb-generate-readme && verb

Running tests

Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command:

$ npm install && npm test

Author

Jon Schlinkert


This file was generated by verb-generate-readme, v0.6.0, on July 05, 2017.

is-relative NPM version NPM monthly downloads NPM total downloads Linux Build Status

Returns true if the path appears to be relative.

Install

Install with npm:

$ npm install --save is-relative

Usage

var isRelative = require('is-relative');
console.log(isRelative('README.md'));
//=> true

console.log(isRelative('/User/dev/foo/README.md'));
//=> false

About

Contributing

Pull requests and stars are always welcome. For bugs and feature requests, please create an issue.

Commits Contributor
13 jonschlinkert
3 shinnn

Building docs

(This project’s readme.md is generated by verb, please don’t edit the readme directly. Any changes to the readme must be made in the .verb.md readme template.)

To generate the readme, run the following command:

$ npm install -g verbose/verb#dev verb-generate-readme && verb

Running tests

Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command:

$ npm install && npm test

Author

Jon Schlinkert


spdx-expression-parse

In a nutshell:

var parse = require('spdx-expression-parse')
var assert = require('assert')

assert.deepEqual(
  parse('BSD-2-Clause'),
  { license: 'BSD-2-Clause' }
)

assert.throws(function () {
  // Should be `Apache-2.0`.
  parse('Apache 2')
})

assert.deepEqual(
  // Licensed under either:
  // - LGPL 2.1
  // - a combination of the Three-Clause BSD and MIT
  parse('(LGPL-2.1 OR BSD-3-Clause AND MIT)'),
  {
    left: { license: 'LGPL-2.1' },
    conjunction: 'or',
    right: {
      left: { license: 'BSD-3-Clause' },
      conjunction: 'and',
      right: { license: 'MIT' }
    }
  }
)

The bulk of the SPDX standard describes syntax and semantics of XML metadata files. This package implements two lightweight, plain-text components of that larger standard:



encodeurl

NPM Version NPM Downloads Node.js Version Build Status Test Coverage

Encode a URL to a percent-encoded form, excluding already-encoded sequences

Installation

This is a Node.js module available through the npm registry. Installation is done using the npm install command:

$ npm install encodeurl

API

var encodeUrl = require('encodeurl')

encodeUrl(url)

Encode a URL to a percent-encoded form, excluding already-encoded sequences.

This function will take an already-encoded URL and encode all the non-URL code points (as UTF-8 byte sequences). It will not encode the “%” character when it is part of a valid percent sequence (%20 will be left as-is, but %foo will be encoded as %25foo).

This encode is meant to be “safe” and does not throw errors. It will try as hard as it can to properly encode the given URL, including replacing any raw, unpaired surrogate pairs with the Unicode replacement character prior to encoding.

This function is similar to the intrinsic function encodeURI, except it will not encode the % character if that is part of a valid sequence, will not encode [ and ] (for IPv6 hostnames) and will replace raw, unpaired surrogate pairs with the Unicode replacement character (instead of throwing).
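
The rules above can be sketched in a few lines. This is an illustration of the described behavior only, not the library's actual implementation (which is more careful about the allowed URL character set and surrogates):

```javascript
// Encode characters outside the printable ASCII URL range, but leave
// valid %XX sequences alone and escape only stray '%' characters.
function encodeUrlSketch(url) {
  return String(url)
    .replace(/%(?![0-9A-Fa-f]{2})/g, '%25')         // '%foo' -> '%25foo'
    .replace(/[^\x21-\x7e]+/g, encodeURIComponent); // ' ' -> '%20', etc.
}

console.log(encodeUrlSketch('http://example.com/a b')); // 'http://example.com/a%20b'
console.log(encodeUrlSketch('%20 and %foo'));           // '%20%20and%20%25foo'
```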

Examples

Encode a URL containing user-controlled data

var encodeUrl = require('encodeurl')
var escapeHtml = require('escape-html')
var http = require('http')

http.createServer(function onRequest (req, res) {
  // get encoded form of inbound url
  var url = encodeUrl(req.url)

  // create html message
  var body = '<p>Location ' + escapeHtml(url) + ' not found</p>'

  // send a 404
  res.statusCode = 404
  res.setHeader('Content-Type', 'text/html; charset=UTF-8')
  res.setHeader('Content-Length', String(Buffer.byteLength(body, 'utf-8')))
  res.end(body, 'utf-8')
})

Encode a URL for use in a header field

var encodeUrl = require('encodeurl')
var escapeHtml = require('escape-html')
var http = require('http')
var url = require('url')

http.createServer(function onRequest (req, res) {
  // parse inbound url
  var href = url.parse(req.url)

  // set new host for redirect
  href.host = 'localhost'
  href.protocol = 'https:'
  href.slashes = true

  // create location header
  var location = encodeUrl(url.format(href))

  // create html message
  var body = '<p>Redirecting to new site: ' + escapeHtml(location) + '</p>'

  // send a 301
  res.statusCode = 301
  res.setHeader('Content-Type', 'text/html; charset=UTF-8')
  res.setHeader('Content-Length', String(Buffer.byteLength(body, 'utf-8')))
  res.setHeader('Location', location)
  res.end(body, 'utf-8')
})

Testing

$ npm test
$ npm run lint

References



isobject NPM version NPM monthly downloads NPM total downloads Linux Build Status

Returns true if the value is an object and not an array or null.

Install

Install with npm:

$ npm install --save isobject

Install with yarn:

$ yarn add isobject

Use is-plain-object if you want only objects that are created by the Object constructor.

Install

Install with npm:

$ npm install isobject

Install with bower

$ bower install isobject

Usage

var isObject = require('isobject');

True

All of the following return true:

isObject({});
isObject(Object.create({}));
isObject(Object.create(Object.prototype));
isObject(Object.create(null));
isObject({});
isObject(new Foo);
isObject(/foo/);

False

All of the following return false:

isObject();
isObject(function () {});
isObject(1);
isObject([]);
isObject(undefined);
isObject(null);
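
For contrast with the is-plain-object note above, both checks can be sketched inline (an illustration in core JavaScript only, not the packages' actual code):

```javascript
// isobject-style check: any non-null, non-array object qualifies.
const isObjectLike = (val) =>
  val != null && typeof val === 'object' && !Array.isArray(val);

// plain-object check: additionally require the Object prototype.
const isPlainObject = (val) =>
  isObjectLike(val) && Object.getPrototypeOf(val) === Object.prototype;

console.log(isObjectLike(/foo/));  // true  (any non-array object)
console.log(isPlainObject(/foo/)); // false (not created by Object)
console.log(isPlainObject({}));    // true
```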

About

Contributing

Pull requests and stars are always welcome. For bugs and feature requests, please create an issue.

Commits Contributor
29 jonschlinkert
4 doowb
1 magnudae
1 LeSuisse
1 tmcw

Building docs

(This project’s readme.md is generated by verb, please don’t edit the readme directly. Any changes to the readme must be made in the .verb.md readme template.)

To generate the readme, run the following command:

$ npm install -g verbose/verb#dev verb-generate-readme && verb

Running tests

Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command:

$ npm install && npm test

Author

Jon Schlinkert


This file was generated by verb-generate-readme, v0.6.0, on June 30, 2017.

datastructures-js

datastructures-js consolidates all the data structures of @datastructures-js into a single repository. The data structures are also distributed in their own repositories for easier maintenance and usability, so that they can be installed and imported individually in code.



install

npm install --save datastructures-js

API

require

// import your required classes
const {
  Queue,
  Stack,
  Set: EnhancedSet, // renamed to avoid conflict with es6 Set
  LinkedList,
  DoublyLinkedList,
  MinHeap,
  MaxHeap,
  MinPriorityQueue,
  MaxPriorityQueue,
  Graph,
  DirectedGraph,
  BinarySearchTree,
  AvlTree,
  Trie
} = require('datastructures-js');

import

// import your required classes
import {
  Queue,
  PriorityQueue,
  Stack,
  Set as EnhancedSet, // renamed to avoid conflict with es6 Set
  LinkedList,
  DoublyLinkedList,
  MinHeap,
  MaxHeap,
  MinPriorityQueue,
  MaxPriorityQueue,
  Graph,
  DirectedGraph,
  BinarySearchTree,
  AvlTree,
  Trie
} from 'datastructures-js';

extend

There are sometimes domain-specific use cases that require either a tweak or additional functionality on top of a data structure. The data structures here are implemented as general-purpose base classes in ES6, so you can always extend any of them to override or add functionality in your own code.

Example

const { Graph } = require('datastructures-js'); // OR require('@datastructures-js/graph')

class BusStationsGraph extends Graph {
  findShortestPath(srcStationId, destStationId) {
    // benefit from Graph to implement your own code 
  }
}

Data Structures

Queue

https://github.com/datastructures-js/queue

Stack

https://github.com/datastructures-js/stack

Set

https://github.com/datastructures-js/set

Linked List

https://github.com/datastructures-js/linked-list

Doubly Linked List

https://github.com/datastructures-js/linked-list

Min Heap

https://github.com/datastructures-js/heap

Max Heap

https://github.com/datastructures-js/heap

Min Priority Queue

https://github.com/datastructures-js/priority-queue

Max Priority Queue

https://github.com/datastructures-js/priority-queue

Graph

https://github.com/datastructures-js/graph

Directed Graph

https://github.com/datastructures-js/graph

Binary Search Tree

https://github.com/datastructures-js/binary-search-tree

AVL Tree

https://github.com/datastructures-js/binary-search-tree

Trie

https://github.com/datastructures-js/trie

Build

grunt build


finalhandler

NPM Version NPM Downloads Node.js Version Build Status Test Coverage

Node.js function to invoke as the final step to respond to an HTTP request.

Installation

This is a Node.js module available through the npm registry. Installation is done using the npm install command:

$ npm install finalhandler

API

var finalhandler = require('finalhandler')

finalhandler(req, res, options)

Returns function to be invoked as the final step for the given req and res. This function is to be invoked as fn(err). If err is falsy, the handler will write out a 404 response to the res. If it is truthy, an error response will be written out to the res.

When an error is written, the following information is added to the response:

The final handler will also unpipe anything from req when it is invoked.

options.env

By default, the environment is determined by the NODE_ENV variable, but it can be overridden by this option.

options.onerror

Provide a function to be called with the err when it exists. Can be used for writing errors to a central location without excessive function generation. Called as onerror(err, req, res).

Examples

always 404

var finalhandler = require('finalhandler')
var http = require('http')

var server = http.createServer(function (req, res) {
  var done = finalhandler(req, res)
  done()
})

server.listen(3000)

perform simple action

var finalhandler = require('finalhandler')
var fs = require('fs')
var http = require('http')

var server = http.createServer(function (req, res) {
  var done = finalhandler(req, res)

  fs.readFile('index.html', function (err, buf) {
    if (err) return done(err)
    res.setHeader('Content-Type', 'text/html')
    res.end(buf)
  })
})

server.listen(3000)

use with middleware-style functions

var finalhandler = require('finalhandler')
var http = require('http')
var serveStatic = require('serve-static')

var serve = serveStatic('public')

var server = http.createServer(function (req, res) {
  var done = finalhandler(req, res)
  serve(req, res, done)
})

server.listen(3000)

keep log of all errors

var finalhandler = require('finalhandler')
var fs = require('fs')
var http = require('http')

var server = http.createServer(function (req, res) {
  var done = finalhandler(req, res, { onerror: logerror })

  fs.readFile('index.html', function (err, buf) {
    if (err) return done(err)
    res.setHeader('Content-Type', 'text/html')
    res.end(buf)
  })
})

server.listen(3000)

function logerror (err) {
  console.error(err.stack || err.toString())
}


array-unique NPM version NPM downloads Build Status

Remove duplicate values from an array. Fastest ES5 implementation.

Install

Install with npm:

$ npm install --save array-unique

Usage

var unique = require('array-unique');

var arr = ['a', 'b', 'c', 'c'];
console.log(unique(arr)) //=> ['a', 'b', 'c']
console.log(arr)         //=> ['a', 'b', 'c']

/* The above modifies the input array. To prevent that at a slight performance cost: */
var unique = require("array-unique").immutable;

var arr = ['a', 'b', 'c', 'c'];
console.log(unique(arr)) //=> ['a', 'b', 'c']
console.log(arr)         //=> ['a', 'b', 'c', 'c']

About

Contributing

Pull requests and stars are always welcome. For bugs and feature requests, please create an issue.

Building docs

(This document was generated by verb-generate-readme (a verb generator), please don’t edit the readme directly. Any changes to the readme must be made in .verb.md.)

To generate the readme and API documentation with verb:

$ npm install -g verb verb-generate-readme && verb

Running tests

Install dev dependencies:

$ npm install -d && npm test

Author

Jon Schlinkert


This file was generated by verb-generate-readme, v0.1.28, on July 31, 2016.

anymatch Build Status Coverage Status

Javascript module to match a string against a regular expression, glob, string, or function that takes the string as an argument and returns a truthy or falsy value. The matcher can also be an array of any or all of these. Useful for allowing a very flexible user-defined config to define things like file paths.

Note: This module has Bash-parity, please be aware that Windows-style backslashes are not supported as separators. See https://github.com/micromatch/micromatch#backslashes for more information.

Usage

npm install anymatch

anymatch(matchers, testString, [returnIndex], options)

const anymatch = require('anymatch');

const matchers = [
  'path/to/file.js',
  'path/anyjs/**/*.js',
  /foo.js$/,
  string => string.includes('bar') && string.length > 10
];

anymatch(matchers, 'path/to/file.js'); // true
anymatch(matchers, 'path/anyjs/baz.js'); // true
anymatch(matchers, 'path/to/foo.js'); // true
anymatch(matchers, 'path/to/bar.js'); // true
anymatch(matchers, 'bar.js'); // false

// returnIndex = true
anymatch(matchers, 'foo.js', {returnIndex: true}); // 2
anymatch(matchers, 'path/anyjs/foo.js', {returnIndex: true}); // 1

// using globs to match directories and their children
anymatch('node_modules', 'node_modules'); // true
anymatch('node_modules', 'node_modules/somelib/index.js'); // false
anymatch('node_modules/**', 'node_modules/somelib/index.js'); // true
anymatch('node_modules/**', '/absolute/path/to/node_modules/somelib/index.js'); // false
anymatch('**/node_modules/**', '/absolute/path/to/node_modules/somelib/index.js'); // true

const matcher = anymatch(matchers);
['foo.js', 'bar.js'].filter(matcher);  // [ 'foo.js' ]

anymatch(matchers)

You can also pass in only your matcher(s) to get a curried function that has already been bound to the provided matching criteria. This can be used as an Array#filter callback.

var matcher = anymatch(matchers);

matcher('path/to/file.js'); // true
matcher('path/anyjs/baz.js', true); // 1

['foo.js', 'bar.js'].filter(matcher); // ['foo.js']

Changelog

See release notes page on GitHub

ISC



is-unc-path NPM version NPM monthly downloads NPM total downloads Linux Build Status

Returns true if a filepath is a windows UNC file path.

Install

Install with npm:

$ npm install --save is-unc-path

Usage

var isUncPath = require('is-unc-path');

true

Returns true for windows UNC paths:

isUncPath('\\/foo/bar');
isUncPath('\\\\foo/bar');
isUncPath('\\\\foo\\admin$');
isUncPath('\\\\foo\\admin$\\system32');
isUncPath('\\\\foo\\temp');
isUncPath('\\\\/foo/bar');
isUncPath('\\\\\\/foo/bar');

false

Returns false for non-UNC paths:

isUncPath('/foo/bar');
isUncPath('/');
isUncPath('/foo');
isUncPath('/foo/');
isUncPath('c:');
isUncPath('c:.');
isUncPath('c:./');
isUncPath('c:./file');
isUncPath('c:/');
isUncPath('c:/file');

Customization

Use .source to use the regex as a component of another regex:

var myRegex = new RegExp(isUncPath.source + 'foo');

Rules for UNC paths

Release history

v1.0.0 - 2017-07-12

Changes

About

Contributing

Pull requests and stars are always welcome. For bugs and feature requests, please create an issue.

Building docs

(This project’s readme.md is generated by verb, please don’t edit the readme directly. Any changes to the readme must be made in the .verb.md readme template.)

To generate the readme, run the following command:

$ npm install -g verbose/verb#dev verb-generate-readme && verb

Running tests

Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command:

$ npm install && npm test

Author

Jon Schlinkert


This file was generated by verb-generate-readme, v0.6.0, on July 13, 2017.

has-values NPM version NPM monthly downloads NPM total downloads Linux Build Status

Returns true if any values exist, false if empty. Works for booleans, functions, numbers, strings, nulls, objects and arrays.

Install

Install with npm:

$ npm install --save has-values

Usage

var hasValue = require('has-values');

hasValue('a');
//=> true

hasValue('');
//=> false

hasValue(1);
//=> true

hasValue(0);
//=> false

hasValue({a: 'a'});
//=> true

hasValue({});
hasValue({foo: undefined});
//=> false

hasValue({foo: null});
//=> true

hasValue(['a']);
//=> true

hasValue([]);
hasValue([[], []]);
hasValue([[[]]]);
//=> false

hasValue(['foo']);
hasValue([0]);
//=> true

hasValue(function(foo) {}); 
//=> true

hasValue(function() {});
//=> true

hasValue(true);
//=> true

hasValue(false);
//=> true

isEmpty

To test for empty values, do:

function isEmpty(o, isZero) {
  return !hasValue(o, isZero);
}

Release history

v1.0.0

About

Contributing

Pull requests and stars are always welcome. For bugs and feature requests, please create an issue.

Building docs

(This project’s readme.md is generated by verb, please don’t edit the readme directly. Any changes to the readme must be made in the .verb.md readme template.)

To generate the readme, run the following command:

$ npm install -g verbose/verb#dev verb-generate-readme && verb

Running tests

Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command:

$ npm install && npm test

Author

Jon Schlinkert


This file was generated by verb-generate-readme, v0.6.0, on May 19, 2017.

brace-expansion

build status downloads Greenkeeper badge

testling badge

Example

var expand = require('brace-expansion');

expand('file-{a,b,c}.jpg')
// => ['file-a.jpg', 'file-b.jpg', 'file-c.jpg']

expand('-v{,,}')
// => ['-v', '-v', '-v']

expand('file{0..2}.jpg')
// => ['file0.jpg', 'file1.jpg', 'file2.jpg']

expand('file-{a..c}.jpg')
// => ['file-a.jpg', 'file-b.jpg', 'file-c.jpg']

expand('file{2..0}.jpg')
// => ['file2.jpg', 'file1.jpg', 'file0.jpg']

expand('file{0..4..2}.jpg')
// => ['file0.jpg', 'file2.jpg', 'file4.jpg']

expand('file-{a..e..2}.jpg')
// => ['file-a.jpg', 'file-c.jpg', 'file-e.jpg']

expand('file{00..10..5}.jpg')
// => ['file00.jpg', 'file05.jpg', 'file10.jpg']

expand('{{A..C},{a..c}}')
// => ['A', 'B', 'C', 'a', 'b', 'c']

expand('ppp{,config,oe{,conf}}')
// => ['ppp', 'pppconfig', 'pppoe', 'pppoeconf']

API

var expand = require('brace-expansion');

var expanded = expand(str)

Return an array of all possible and valid expansions of str. If none are found, [str] is returned.

Valid expansions are:

/^(.*,)+(.+)?$/
// {a,b,...}

A comma separated list of options, like {a,b} or {a,{b,c}} or {,a,}.

/^-?\d+\.\.-?\d+(\.\.-?\d+)?$/
// {x..y[..incr]}

A numeric sequence from x to y inclusive, with optional increment. If x or y start with a leading 0, all the numbers will be padded to have equal length. Negative numbers and backwards iteration work too.
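
The zero-padding rule can be sketched in a few lines (an illustration of the rule, not this package's implementation):

```javascript
// Expand {x..y[..incr]}: if x or y has a leading zero, pad every
// result to the width of the widest endpoint.
function expandRange(x, y, incr = 1) {
  const width = (x.startsWith('0') || y.startsWith('0'))
    ? Math.max(x.length, y.length)
    : 0;
  const step = Number(x) <= Number(y) ? Math.abs(incr) : -Math.abs(incr);
  const out = [];
  for (let i = Number(x); step > 0 ? i <= Number(y) : i >= Number(y); i += step) {
    out.push(String(i).padStart(width, '0'));
  }
  return out;
}

console.log(expandRange('00', '10', 5)); // [ '00', '05', '10' ]
console.log(expandRange('2', '0'));      // [ '2', '1', '0' ]
```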

/^[a-zA-Z]\.\.[a-zA-Z](\.\.-?\d+)?$/
// {x..y[..incr]}

An alphabetic sequence from x to y inclusive, with optional increment. x and y must be exactly one character, and if given, incr must be a number.

For compatibility reasons, the string ${ is not eligible for brace expansion.

Installation

With npm do:

npm install brace-expansion

Sponsors

This module is proudly supported by my Sponsors!

Do you want to support modules like this to improve their quality, stability and weigh in on new features? Then please consider donating to my Patreon. Not sure how much of my modules you’re using? Try feross/thanks!



mime-db

NPM Version NPM Downloads Node.js Version Build Status Coverage Status

This is a database of all mime types. It consists of a single, public JSON file and does not include any logic, allowing it to remain as un-opinionated as possible with an API. It aggregates data from the following sources:

Installation

npm install mime-db

Database Download

If you’re crazy enough to use this in the browser, you can just grab the JSON file using jsDelivr. It is recommended to replace master with a release tag as the JSON format may change in the future.

https://cdn.jsdelivr.net/gh/jshttp/mime-db@master/db.json

Usage

var db = require('mime-db')

// grab data on .js files
var data = db['application/javascript']

Data Structure

The JSON file is a map lookup for lowercased mime types. Each mime type has the following properties:

If unknown, every property could be undefined.

Contributing

To edit the database, only make PRs against src/custom.json or src/custom-suffix.json.

The src/custom.json file is a JSON object with the MIME type as the keys and the values being an object with the following keys:

To update the build, run npm run build.

Adding Custom Media Types

The best way to get new media types included in this library is to register them with the IANA. The community registration procedure is outlined in RFC 6838 section 5. Types registered with the IANA are automatically pulled into this library.

If that is not possible / feasible, they can be added directly here as a “custom” type. To do this, it is required to have a primary source that definitively lists the media type. If an extension is going to be listed as associated with this media type, the source must definitively link the media type and extension as well.



mime-db

NPM Version NPM Downloads Node.js Version Build Status Coverage Status

This is a database of all mime types. It consists of a single, public JSON file and does not include any logic, allowing it to remain as un-opinionated as possible with an API. It aggregates data from the following sources:

Installation

npm install mime-db

Database Download

If you’re crazy enough to use this in the browser, you can just grab the JSON file using jsDelivr. It is recommended to replace master with a release tag as the JSON format may change in the future.

https://cdn.jsdelivr.net/gh/jshttp/mime-db@master/db.json

Usage

var db = require('mime-db')

// grab data on .js files
var data = db['application/javascript']

Data Structure

The JSON file is a map lookup for lowercased mime types. Each mime type has the following properties:

If unknown, every property could be undefined.

Contributing

To edit the database, only make PRs against src/custom.json or src/custom-suffix.json.

The src/custom.json file is a JSON object with the MIME type as the keys and the values being an object with the following keys:

To update the build, run npm run build.

Adding Custom Media Types

The best way to get new media types included in this library is to register them with the IANA. The community registration procedure is outlined in RFC 6838 section 5. Types registered with the IANA are automatically pulled into this library.

If that is not possible / feasible, they can be added directly here as a “custom” type. To do this, it is required to have a primary source that definitively lists the media type. If an extension is going to be listed as associated with this media type, the source must definitively link the media type and extension as well.



accepts

NPM Version NPM Downloads Node.js Version Build Status Test Coverage

Higher level content negotiation based on negotiator. Extracted from koa for general use.

In addition to negotiator, it allows:

Installation

This is a Node.js module available through the npm registry. Installation is done using the npm install command:

$ npm install accepts

API

var accepts = require('accepts')

accepts(req)

Create a new Accepts object for the given req.

.charset(charsets)

Return the first accepted charset. If nothing in charsets is accepted, then false is returned.

.charsets()

Return the charsets that the request accepts, in the order of the client’s preference (most preferred first).

.encoding(encodings)

Return the first accepted encoding. If nothing in encodings is accepted, then false is returned.

.encodings()

Return the encodings that the request accepts, in the order of the client’s preference (most preferred first).

.language(languages)

Return the first accepted language. If nothing in languages is accepted, then false is returned.

.languages()

Return the languages that the request accepts, in the order of the client’s preference (most preferred first).

.type(types)

Return the first accepted type (and it is returned as the same text as what appears in the types array). If nothing in types is accepted, then false is returned.

The types array can contain full MIME types or file extensions. Any value that is not a full MIME type is passed to require('mime-types').lookup.

.types()

Return the types that the request accepts, in the order of the client’s preference (most preferred first).

Examples

Simple type negotiation

This simple example shows how to use accepts to return a differently typed response body based on what the client wants to accept. The server lists its preferences in order and will get back the best match between the client and server.

var accepts = require('accepts')
var http = require('http')

function app (req, res) {
  var accept = accepts(req)

  // the order of this list is significant; should be server preferred order
  switch (accept.type(['json', 'html'])) {
    case 'json':
      res.setHeader('Content-Type', 'application/json')
      res.write('{"hello":"world!"}')
      break
    case 'html':
      res.setHeader('Content-Type', 'text/html')
      res.write('<b>hello, world!</b>')
      break
    default:
      // the fallback is text/plain, so no need to specify it above
      res.setHeader('Content-Type', 'text/plain')
      res.write('hello, world!')
      break
  }

  res.end()
}

http.createServer(app).listen(3000)

You can test this out with the cURL program:

curl -I -H'Accept: text/html' http://localhost:3000/


parseurl

NPM Version NPM Downloads Node.js Version Build Status Test Coverage

Parse a URL with memoization.

Install

This is a Node.js module available through the npm registry. Installation is done using the npm install command:

$ npm install parseurl

API

var parseurl = require('parseurl')

parseurl(req)

Parse the URL of the given request object (looks at the req.url property) and return the result. The result is the same as url.parse in Node.js core. Calling this function multiple times on the same req where req.url does not change will return a cached parsed object, rather than parsing again.

parseurl.original(req)

Parse the original URL of the given request object and return the result. This works by trying to parse req.originalUrl if it is a string, otherwise parses req.url. The result is the same as url.parse in Node.js core. Calling this function multiple times on the same req where req.originalUrl does not change will return a cached parsed object, rather than parsing again.

Benchmark

$ npm run-script bench

> parseurl@1.3.3 bench nodejs-parseurl
> node benchmark/index.js

  http_parser@2.8.0
  node@10.6.0
  v8@6.7.288.46-node.13
  uv@1.21.0
  zlib@1.2.11
  ares@1.14.0
  modules@64
  nghttp2@1.32.0
  napi@3
  openssl@1.1.0h
  icu@61.1
  unicode@10.0
  cldr@33.0
  tz@2018c

> node benchmark/fullurl.js

  Parsing URL "http://localhost:8888/foo/bar?user=tj&pet=fluffy"

  4 tests completed.

  fasturl            x 2,207,842 ops/sec ±3.76% (184 runs sampled)
  nativeurl - legacy x   507,180 ops/sec ±0.82% (191 runs sampled)
  nativeurl - whatwg x   290,044 ops/sec ±1.96% (189 runs sampled)
  parseurl           x   488,907 ops/sec ±2.13% (192 runs sampled)

> node benchmark/pathquery.js

  Parsing URL "/foo/bar?user=tj&pet=fluffy"

  4 tests completed.

  fasturl            x 3,812,564 ops/sec ±3.15% (188 runs sampled)
  nativeurl - legacy x 2,651,631 ops/sec ±1.68% (189 runs sampled)
  nativeurl - whatwg x   161,837 ops/sec ±2.26% (189 runs sampled)
  parseurl           x 4,166,338 ops/sec ±2.23% (184 runs sampled)

> node benchmark/samerequest.js

  Parsing URL "/foo/bar?user=tj&pet=fluffy" on same request object

  4 tests completed.

  fasturl            x  3,821,651 ops/sec ±2.42% (185 runs sampled)
  nativeurl - legacy x  2,651,162 ops/sec ±1.90% (187 runs sampled)
  nativeurl - whatwg x    175,166 ops/sec ±1.44% (188 runs sampled)
  parseurl           x 14,912,606 ops/sec ±3.59% (183 runs sampled)

> node benchmark/simplepath.js

  Parsing URL "/foo/bar"

  4 tests completed.

  fasturl            x 12,421,765 ops/sec ±2.04% (191 runs sampled)
  nativeurl - legacy x  7,546,036 ops/sec ±1.41% (188 runs sampled)
  nativeurl - whatwg x    198,843 ops/sec ±1.83% (189 runs sampled)
  parseurl           x 24,244,006 ops/sec ±0.51% (194 runs sampled)

> node benchmark/slash.js

  Parsing URL "/"

  4 tests completed.

  fasturl            x 17,159,456 ops/sec ±3.25% (188 runs sampled)
  nativeurl - legacy x 11,635,097 ops/sec ±3.79% (184 runs sampled)
  nativeurl - whatwg x    240,693 ops/sec ±0.83% (189 runs sampled)
  parseurl           x 42,279,067 ops/sec ±0.55% (190 runs sampled)


is-extendable NPM version NPM monthly downloads NPM total downloads Linux Build Status

Returns true if a value is a plain object, array or function.

Install

Install with npm:

$ npm install --save is-extendable

Usage

var isExtendable = require('is-extendable');

Returns true if the value is any of the following:

Notes

All objects in JavaScript can have keys, but it’s a pain to check for this, since we either need to verify that the value is not null or undefined and:

Also note that an extendable object is not the same as an extensible object, which is one that (in es6) is not sealed, frozen, or marked as non-extensible using preventExtensions.
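A minimal sketch of the check described above (assumed logic for illustration, not the module's exact source; isExtendableSketch is an invented name):

```javascript
// Returns true for objects, arrays and functions; primitives, null and
// undefined are not safely extendable.
function isExtendableSketch (val) {
  return typeof val === 'function' ||
    Array.isArray(val) ||
    (val !== null && typeof val === 'object')
}

console.log(isExtendableSketch({}))              // true
console.log(isExtendableSketch([]))              // true
console.log(isExtendableSketch(function () {}))  // true
console.log(isExtendableSketch('str'))           // false
console.log(isExtendableSketch(null))            // false
```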

Release history

v1.0.0 - 2017/07/20

Breaking changes

About

Contributing

Pull requests and stars are always welcome. For bugs and feature requests, please create an issue.

Building docs

(This project’s readme.md is generated by verb, please don’t edit the readme directly. Any changes to the readme must be made in the .verb.md readme template.)

To generate the readme, run the following command:

$ npm install -g verbose/verb#dev verb-generate-readme && verb

Running tests

Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command:

$ npm install && npm test

Author

Jon Schlinkert


This file was generated by verb-generate-readme, v0.6.0, on July 20, 2017.


node-error-ex

Travis-CI.org Build Status Coveralls.io Coverage Rating

Easily subclass and customize new Error types

Examples

To include in your project:

var errorEx = require('error-ex');

To create an error message type with a specific name (note that ErrorFn.name will not reflect this):

var JSONError = errorEx('JSONError');

var err = new JSONError('error');
err.name; //-> JSONError
throw err; //-> JSONError: error

To add a stack line:

var JSONError = errorEx('JSONError', {fileName: errorEx.line('in %s')});

var err = new JSONError('error');
err.fileName = '/a/b/c/foo.json';
throw err; //-> (line 2)-> in /a/b/c/foo.json

To append to the error message:

var JSONError = errorEx('JSONError', {fileName: errorEx.append('in %s')});

var err = new JSONError('error');
err.fileName = '/a/b/c/foo.json';
throw err; //-> JSONError: error in /a/b/c/foo.json

API

errorEx([name], [properties])

Creates a new ErrorEx error type

Returns a constructor (Function) that can be used just like the regular Error constructor.

var errorEx = require('error-ex');

var BasicError = errorEx();

var NamedError = errorEx('NamedError');

// --

var AdvancedError = errorEx('AdvancedError', {
    foo: {
        line: function (value, stack) {
            if (value) {
                return 'bar ' + value;
            }
            return null;
        }
    }
});

var err = new AdvancedError('hello, world');
err.foo = 'baz';
throw err;

/*
    AdvancedError: hello, world
        bar baz
        at tryReadme() (readme.js:20:1)
*/

errorEx.line(str)

Creates a stack line using a delimiter

This is a helper function. It is to be used in lieu of writing a value object for properties values.

var errorEx = require('error-ex');

var FileError = errorEx('FileError', {fileName: errorEx.line('in %s')});

var err = new FileError('problem reading file');
err.fileName = '/a/b/c/d/foo.js';
throw err;

/*
    FileError: problem reading file
        in /a/b/c/d/foo.js
        at tryReadme() (readme.js:7:1)
*/

errorEx.append(str)

Appends to the error.message string

This is a helper function. It is to be used in lieu of writing a value object for properties values.

var errorEx = require('error-ex');

var SyntaxError = errorEx('SyntaxError', {fileName: errorEx.append('in %s')});

var err = new SyntaxError('improper indentation');
err.fileName = '/a/b/c/d/foo.js';
throw err;

/*
    SyntaxError: improper indentation in /a/b/c/d/foo.js
        at tryReadme() (readme.js:7:1)
*/


ci-info

Get details about the current Continuous Integration environment.

Please open an issue if your CI server isn’t properly detected :)

npm Build status js-standard-style

Installation

npm install ci-info --save

Usage

var ci = require('ci-info')

if (ci.isCI) {
  console.log('The name of the CI server is:', ci.name)
} else {
  console.log('This program is not running on a CI server')
}

Officially supported CI servers:

Name Constant isPR
AWS CodeBuild ci.CODEBUILD 🚫
AppVeyor ci.APPVEYOR
Azure Pipelines ci.AZURE_PIPELINES
Bamboo ci.BAMBOO 🚫
Bitbucket Pipelines ci.BITBUCKET
Bitrise ci.BITRISE
Buddy ci.BUDDY
Buildkite ci.BUILDKITE
CircleCI ci.CIRCLE
Cirrus CI ci.CIRRUS
Codeship ci.CODESHIP 🚫
Drone ci.DRONE
dsari ci.DSARI 🚫
GitLab CI ci.GITLAB 🚫
GoCD ci.GOCD 🚫
Hudson ci.HUDSON 🚫
Jenkins CI ci.JENKINS
Magnum CI ci.MAGNUM 🚫
Netlify CI ci.NETLIFY
Sail CI ci.SAIL
Semaphore ci.SEMAPHORE
Shippable ci.SHIPPABLE
Solano CI ci.SOLANO
Strider CD ci.STRIDER 🚫
TaskCluster ci.TASKCLUSTER 🚫
TeamCity by JetBrains ci.TEAMCITY 🚫
Travis CI ci.TRAVIS

API

ci.name

Returns a string containing the name of the CI server the code is running on. If the CI server is not detected, it returns null.

Don’t depend on the value of this string not changing for a specific vendor. If you find yourself writing ci.name === 'Travis CI', you most likely want to use ci.TRAVIS instead.

ci.isCI

Returns a boolean. Will be true if the code is running on a CI server, otherwise false.

Some CI servers not listed here might still trigger the ci.isCI boolean to be set to true if they use certain vendor neutral environment variables. In those cases ci.name will be null and no vendor specific boolean will be set to true.

ci.isPR

Returns a boolean if PR detection is supported for the current CI server: true if a PR is being tested, otherwise false. If PR detection is not supported for the current CI server, the value will be null.

ci.<VENDOR-CONSTANT>

A vendor specific boolean constant is exposed for each supported CI vendor. A constant will be true if the code is determined to be running on the given CI server, otherwise false.

Examples of vendor constants are ci.TRAVIS or ci.APPVEYOR. For a complete list, see the support table above.
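Vendor detection typically boils down to checking a vendor-specific environment variable; a hypothetical sketch (detectTravis is an invented helper, and relies on the common convention that Travis CI sets TRAVIS in the environment):

```javascript
// Hypothetical sketch: Travis CI sets the TRAVIS environment variable,
// so its vendor constant can be derived from the environment alone.
function detectTravis (env) {
  return 'TRAVIS' in env
}

console.log(detectTravis({ TRAVIS: 'true' })) // true
console.log(detectTravis({}))                 // false
```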

Deprecated vendor constants that will be removed in the next major release:



node-libs-browser

The node core libs for in-browser usage.

NOTE: This library is deprecated and won’t accept Pull Requests that include Breaking Changes or new Features. Only bugfixes are accepted.

dependencies status

Exports a hash object of absolute paths to each lib, keyed by lib names. Modules without browser replacements are null.

Some modules have mocks in the mock directory. These are replacements with minimal functionality.

lib name browser implementation mock implementation
assert defunctzombie/commonjs-assert
buffer feross/buffer buffer.js
child_process
cluster
console Raynos/console-browserify console.js
constants juliangruber/constants-browserify
crypto crypto-browserify/crypto-browserify
dgram
dns dns.js
domain bevry/domain-browser
events Gozala/events
fs
http jhiesey/stream-http
https substack/https-browserify
module
net net.js
os CoderPuppy/os-browserify
path substack/path-browserify
process shtylman/node-process process.js
punycode bestiejs/punycode.js
querystring mike-spainhower/querystring
readline
repl
stream substack/stream-browserify
string_decoder rvagg/string_decoder
sys defunctzombie/node-util
timers jryans/timers-browserify
tls tls.js
tty substack/tty-browserify tty.js
url defunctzombie/node-url
util defunctzombie/node-util
vm substack/vm-browserify
zlib devongovett/browserify-zlib

Outdated versions

buffer

The current buffer implementation uses feross/buffer@4.x because feross/buffer@5.x relies on typed arrays. This will be dropped as soon as IE9 is not a typical browser target anymore.

punycode

The current punycode implementation uses bestiejs/punycode.js@1.x because bestiejs/punycode.js@2.x requires modern JS engines that understand const and let. It will be removed someday since it has already been deprecated from the node API.

Prettier Banner
Prettier Banner

Opinionated Code Formatter

JavaScript · TypeScript · Flow · JSX · JSON
CSS · SCSS · Less
HTML · Vue · Angular
GraphQL · Markdown · YAML
Your favorite language?

Github Actions Build Status Github Actions Build Status Github Actions Build Status Codecov Coverage Status Blazing Fast
npm version weekly downloads from npm code style: prettier Chat on Gitter Follow Prettier on Twitter

Intro

Prettier is an opinionated code formatter. It enforces a consistent style by parsing your code and re-printing it with its own rules that take the maximum line length into account, wrapping code when necessary.

Input

foo(reallyLongArg(), omgSoManyParameters(), IShouldRefactorThis(), isThereSeriouslyAnotherOne());

Output

foo(
  reallyLongArg(),
  omgSoManyParameters(),
  IShouldRefactorThis(),
  isThereSeriouslyAnotherOne()
);

Prettier can be run in your editor on-save, in a pre-commit hook, or in CI environments to ensure your codebase has a consistent style without devs ever having to post a nit-picky comment on a code review ever again!


Documentation

Install · Options · CLI · API

Playground


Badge

Show the world you’re using Prettiercode style: prettier

[![code style: prettier](https://img.shields.io/badge/code_style-prettier-ff69b4.svg?style=flat-square)](https://github.com/prettier/prettier)

Contributing

See CONTRIBUTING.md.



etag

NPM Version NPM Downloads Node.js Version Build Status Test Coverage

Create simple HTTP ETags

This module generates HTTP ETags (as defined in RFC 7232) for use in HTTP responses.

Installation

This is a Node.js module available through the npm registry. Installation is done using the npm install command:

$ npm install etag

API

var etag = require('etag')

etag(entity, options)

Generate a strong ETag for the given entity. This should be the complete body of the entity. Strings, Buffers, and fs.Stats are accepted. By default, a strong ETag is generated except for fs.Stats, which will generate a weak ETag (this can be overridden by options.weak).

res.setHeader('ETag', etag(body))

Options

etag accepts these properties in the options object.

weak

Specifies if the generated ETag will include the weak validator mark (that is, the leading W/). The actual entity tag is the same. The default value is false, unless the entity is fs.Stats, in which case it is true.

Testing

$ npm test

Benchmark

$ npm run-script bench

> etag@1.8.1 bench nodejs-etag
> node benchmark/index.js

  http_parser@2.7.0
  node@6.11.1
  v8@5.1.281.103
  uv@1.11.0
  zlib@1.2.11
  ares@1.10.1-DEV
  icu@58.2
  modules@48
  openssl@1.0.2k

> node benchmark/body0-100b.js

  100B body

  4 tests completed.

  buffer - strong x 258,647 ops/sec ±1.07% (180 runs sampled)
  buffer - weak   x 263,812 ops/sec ±0.61% (184 runs sampled)
  string - strong x 259,955 ops/sec ±1.19% (185 runs sampled)
  string - weak   x 264,356 ops/sec ±1.09% (184 runs sampled)

> node benchmark/body1-1kb.js

  1KB body

  4 tests completed.

  buffer - strong x 189,018 ops/sec ±1.12% (182 runs sampled)
  buffer - weak   x 190,586 ops/sec ±0.81% (186 runs sampled)
  string - strong x 144,272 ops/sec ±0.96% (188 runs sampled)
  string - weak   x 145,380 ops/sec ±1.43% (187 runs sampled)

> node benchmark/body2-5kb.js

  5KB body

  4 tests completed.

  buffer - strong x 92,435 ops/sec ±0.42% (188 runs sampled)
  buffer - weak   x 92,373 ops/sec ±0.58% (189 runs sampled)
  string - strong x 48,850 ops/sec ±0.56% (186 runs sampled)
  string - weak   x 49,380 ops/sec ±0.56% (190 runs sampled)

> node benchmark/body3-10kb.js

  10KB body

  4 tests completed.

  buffer - strong x 55,989 ops/sec ±0.93% (188 runs sampled)
  buffer - weak   x 56,148 ops/sec ±0.55% (190 runs sampled)
  string - strong x 27,345 ops/sec ±0.43% (188 runs sampled)
  string - weak   x 27,496 ops/sec ±0.45% (190 runs sampled)

> node benchmark/body4-100kb.js

  100KB body

  4 tests completed.

  buffer - strong x 7,083 ops/sec ±0.22% (190 runs sampled)
  buffer - weak   x 7,115 ops/sec ±0.26% (191 runs sampled)
  string - strong x 3,068 ops/sec ±0.34% (190 runs sampled)
  string - weak   x 3,096 ops/sec ±0.35% (190 runs sampled)

> node benchmark/stats.js

  stat

  4 tests completed.

  real - strong x 871,642 ops/sec ±0.34% (189 runs sampled)
  real - weak   x 867,613 ops/sec ±0.39% (190 runs sampled)
  fake - strong x 401,051 ops/sec ±0.40% (189 runs sampled)
  fake - weak   x 400,100 ops/sec ±0.47% (188 runs sampled)


hosted-git-info

This will let you identify and transform various git host URLs between protocols. It can also tell you the URL for the raw path of a particular file, for direct access without git.

Example

var hostedGitInfo = require("hosted-git-info")
var info = hostedGitInfo.fromUrl("git@github.com:npm/hosted-git-info.git", opts)
/* info looks like:
{
  type: "github",
  domain: "github.com",
  user: "npm",
  project: "hosted-git-info"
}
*/

If the URL can’t be matched with a git host, null will be returned. We can match git, ssh, and https URLs. Additionally, we can match ssh connect strings (git@github.com:npm/hosted-git-info) and shortcuts (eg, github:npm/hosted-git-info). GitHub, specifically, is also detected in a third, unprefixed form: npm/hosted-git-info.

If it does match, the returned object has properties of:

Version Contract

The major version will be bumped any time…

Implications:

Usage

var info = hostedGitInfo.fromUrl(gitSpecifier[, options])

Methods

All of the methods take the same options as the fromUrl factory. Options provided to a method override those provided to the constructor.

info.file(path, opts)

Given the path of a file relative to the repository, returns a URL for directly fetching it from the githost. If no committish was set then master will be used as the default.

For example hostedGitInfo.fromUrl("git@github.com:npm/hosted-git-info.git#v1.0.0").file("package.json") would return https://raw.githubusercontent.com/npm/hosted-git-info/v1.0.0/package.json

info.shortcut(opts)

eg, github:npm/hosted-git-info

info.browse(path, fragment, opts)

eg, https://github.com/npm/hosted-git-info/tree/v1.2.0, https://github.com/npm/hosted-git-info/tree/v1.2.0/package.json, https://github.com/npm/hosted-git-info/tree/v1.2.0/README.md#supported-hosts

info.bugs(opts)

eg, https://github.com/npm/hosted-git-info/issues

info.docs(opts)

eg, https://github.com/npm/hosted-git-info/tree/v1.2.0#readme

info.https(opts)

eg, git+https://github.com/npm/hosted-git-info.git

info.sshurl(opts)

eg, git+ssh://git@github.com/npm/hosted-git-info.git

info.ssh(opts)

eg, git@github.com:npm/hosted-git-info.git

info.path(opts)

eg, npm/hosted-git-info

info.tarball(opts)

eg, https://github.com/npm/hosted-git-info/archive/v1.2.0.tar.gz

info.getDefaultRepresentation()

Returns the default output type. The default output type is based on the string you passed in to be parsed.

info.toString(opts)

Uses the getDefaultRepresentation to call one of the other methods to get a URL for this resource. As such hostedGitInfo.fromUrl(url).toString() will give you a normalized version of the URL that still uses the same protocol.

Shortcuts will still be returned as shortcuts, but the special case github form of org/project will be normalized to github:org/project.

SSH connect strings will be normalized into git+ssh URLs.
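The shortcut normalization described above can be sketched like this (normalizeShortcut is a hypothetical helper for illustration, not part of the module's API):

```javascript
// Hypothetical sketch: the bare GitHub form org/project is normalized to
// the explicit github: shortcut; other specifiers pass through unchanged.
function normalizeShortcut (spec) {
  if (/^[^:@\/]+\/[^:@\/]+$/.test(spec)) {
    return 'github:' + spec
  }
  return spec
}

console.log(normalizeShortcut('npm/hosted-git-info'))        // 'github:npm/hosted-git-info'
console.log(normalizeShortcut('github:npm/hosted-git-info')) // unchanged
```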

Currently this supports GitHub, Bitbucket, and GitLab. Pull requests for additional hosts are welcome.






Regular Expression Tokenizer

Tokenizes strings that represent regular expressions.

Build Status Dependency Status codecov



Usage

var ret = require('ret');

var tokens = ret(/foo|bar/.source);

tokens will contain the following object

{
  "type": ret.types.ROOT,
  "options": [
    [ { "type": ret.types.CHAR, "value": 102 },
      { "type": ret.types.CHAR, "value": 111 },
      { "type": ret.types.CHAR, "value": 111 } ],
    [ { "type": ret.types.CHAR, "value":  98 },
      { "type": ret.types.CHAR, "value":  97 },
      { "type": ret.types.CHAR, "value": 114 } ]
  ]
}


Token Types

ret.types is a collection of the various token types exported by ret.

ROOT

Only used in the root of the regexp. This is needed due to the possibility of the root containing a pipe | character. In that case, the token will have an options key that will be an array of arrays of tokens. If not, it will contain a stack key that is an array of tokens.

{
  "type": ret.types.ROOT,
  "stack": [token1, token2...],
}
{
  "type": ret.types.ROOT,
  "options": [
    [token1, token2...],
    [othertoken1, othertoken2...]
    ...
  ],
}

GROUP

Groups contain tokens that are inside of a parenthesis. If the group begins with ? followed by another character, it’s a special type of group. A ‘:’ tells the group not to be remembered when exec is used. ‘=’ means the previous token matches only if followed by this group, and ‘!’ means the previous token matches only if NOT followed.

Like root, it can contain an options key instead of stack if there is a pipe.

{
  "type": ret.types.GROUP,
  "remember": true,
  "followedBy": false,
  "notFollowedBy": false,
  "stack": [token1, token2...],
}
{
  "type": ret.types.GROUP,
  "remember": true,
  "followedBy": false,
  "notFollowedBy": false,
  "options": [
    [token1, token2...],
    [othertoken1, othertoken2...]
    ...
  ],
}

POSITION

\b, \B, ^, and $ specify positions in the regexp.

{
  "type": ret.types.POSITION,
  "value": "^",
}

SET

Contains a key set specifying what tokens are allowed and a key not specifying if the set should be negated. A set can contain other sets, ranges, and characters.

{
  "type": ret.types.SET,
  "set": [token1, token2...],
  "not": false,
}

RANGE

Used in set tokens to specify a character range. from and to are character codes.

{
  "type": ret.types.RANGE,
  "from": 97,
  "to": 122,
}

REPETITION

{
  "type": ret.types.REPETITION,
  "min": 0,
  "max": Infinity,
  "value": token,
}
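For instance, tokenizing /a+/ would be expected, under the shapes above, to produce a REPETITION token with min 1 and max Infinity wrapping a CHAR token. This object is written out by hand for illustration (the string type names stand in for the ret.types constants); it was not produced by running ret:

```javascript
// Hand-written expectation for the tokens of /a+/ (illustrative only).
var expected = {
  type: 'ROOT',
  stack: [{
    type: 'REPETITION',
    min: 1,
    max: Infinity,
    value: { type: 'CHAR', value: 97 } // character code for 'a'
  }]
}

console.log(expected.stack[0].value.value === 'a'.charCodeAt(0)) // true
```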

REFERENCE

References a group token. value is 1-9.

{
  "type": ret.types.REFERENCE,
  "value": 1,
}

CHAR

Represents a single character token. value is the character code. This might seem cluttered compared to concatenating characters together, but since repetition tokens only repeat the last token (not the last clause, as the pipe does), it’s simpler to do it this way.

{
  "type": ret.types.CHAR,
  "value": 123,
}

Errors

ret.js will throw errors if given a string with an invalid regular expression. All possible errors are



Install

npm install ret


Tests

Tests are written with vows

npm test




babel-eslint npm travis npm-downloads

babel-eslint allows you to lint ALL valid Babel code with the fantastic ESLint.

Why Use babel-eslint

You only need to use babel-eslint if you are using types (Flow) or experimental features not supported in ESLint itself yet. Otherwise try the default parser (you don’t have to use it just because you are using Babel).


If there is an issue, first check if it can be reproduced with the regular parser or with the latest versions of eslint and babel-eslint!

For questions and support please visit the #discussion babel slack channel (sign up here) or eslint gitter!

Note that the ecmaFeatures config property may still be required for ESLint to work properly with features not in ECMAScript 5 by default. Examples are globalReturn and modules.

Known Issues

Flow: Check out eslint-plugin-flowtype, an ESLint plugin that makes Flow type annotations global variables and marks declarations as used. It solves the problem of false positives with no-undef and no-unused-vars.
- no-undef for global Flow types: ReactElement, ReactClass #130
  - Workaround: define types as globals in .eslintrc, or define types and import them: import type ReactElement from './types'
- no-unused-vars/no-undef with Flow declarations (declare module A {}) #132

Modules/strict mode - no-unused-vars: [2, {vars: local}] #136

Please check out eslint-plugin-react for React/JSX issues - no-unused-vars with jsx

Please check out eslint-plugin-babel for other issues

How does it work?

ESLint allows custom parsers. This is great but some of the syntax nodes that Babel supports aren’t supported by ESLint. When using this plugin, ESLint is monkeypatched and your code is transformed into code that ESLint can understand. All location info, such as line numbers and columns, is also retained so you can track down errors with ease.

Basically babel-eslint exports an index.js that a linter can use. It just needs to export a parse method that takes in a string of code and outputs an AST.
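In other words, the contract is tiny; a hypothetical skeleton of such a parser module (the AST below is a stubbed ESTree Program for illustration, not babel-eslint's real output):

```javascript
// Skeleton of a custom ESLint parser: export a parse(code) function that
// returns an ESTree-shaped AST. A real implementation would delegate to
// Babel and convert its AST; this stub returns an empty Program.
function parse (code) {
  return {
    type: 'Program',
    body: [],
    sourceType: 'module',
    range: [0, code.length],
    loc: {
      start: { line: 1, column: 0 },
      end: { line: 1, column: code.length }
    }
  }
}

module.exports = { parse }

console.log(parse('var x = 1').type) // 'Program'
```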

Usage

ESLint babel-eslint
4.x >= 6.x
3.x >= 6.x
2.x >= 6.x
1.x >= 5.x

Install

Ensure that you have substituted the correct version lock for eslint and babel-eslint into this command:

$ npm install eslint@4.x babel-eslint@8 --save-dev
# or
$ yarn add eslint@4.x babel-eslint@8 -D

Setup

.eslintrc

{
  "parser": "babel-eslint",
  "rules": {
    "strict": 0
  }
}

Check out the ESLint docs for all possible rules.

Configuration

.eslintrc

{
  "parser": "babel-eslint",
  "parserOptions": {
    "sourceType": "module",
    "allowImportExportEverywhere": false,
    "codeFrame": true
  }
}

Run

$ eslint your-files-here


@datastructures-js/stack

build:? npm npm npm

A wrapper around javascript array push/pop with a standard stack interface.
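That idea can be sketched in a few lines (StackSketch is a hypothetical simplification for illustration, not the package's source):

```javascript
// Thin wrapper over Array push/pop exposing a standard stack interface.
class StackSketch {
  constructor (elements) {
    this._elements = elements || []
  }
  push (element) {
    this._elements.push(element)
    return this
  }
  peek () {
    return this.isEmpty() ? null : this._elements[this._elements.length - 1]
  }
  pop () {
    return this.isEmpty() ? null : this._elements.pop()
  }
  isEmpty () {
    return this._elements.length === 0
  }
  size () {
    return this._elements.length
  }
}

const s = new StackSketch()
s.push(10).push(20)
console.log(s.peek()) // 20
console.log(s.pop())  // 20
console.log(s.size()) // 1
```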



Table of Contents

Install

npm install --save @datastructures-js/stack

API

require

const Stack = require('@datastructures-js/stack');

import

import Stack from '@datastructures-js/stack';

Construction

using “new Stack(array)”

Example
// empty stack
const stack = new Stack();

// from an array
const stack = new Stack([10, 3, 8, 40, 1]);

using “Stack.fromArray(array)”

Example
// empty stack
const stack = Stack.fromArray([]);

// with elements
const list = [10, 3, 8, 40, 1];
const stack = Stack.fromArray(list);

// If the list should not be mutated, simply construct the stack from a copy of it.
const stack = Stack.fromArray(list.slice(0));

.push(element)

push an element to the top of the stack.

params
name type
element object
runtime
O(1)

Example

stack.push('test');

.peek()

returns the top element in the stack.

return
object
runtime
O(1)

Example

console.log(stack.peek()); // test

.pop()

removes and returns the top element of the stack.

return
object
runtime
O(1)

Example

console.log(stack.pop()); // test
console.log(stack.peek()); // null

.isEmpty()

checks if the stack is empty.

return
boolean
runtime
O(1)

Example

stack.push('test');
console.log(stack.isEmpty()); // false

.size()

returns the number of elements in the stack.

return
number
runtime
O(1)

Example

console.log(stack.size()); // 1

.clone()

creates a shallow copy of the stack.

return
Stack
runtime
O(n)

Example

const stack = Stack.fromArray([{ id: 2 }, { id: 4 }, { id: 8 }]);
const clone =  stack.clone();

clone.pop();

console.log(stack.peek()); // { id: 8 }
console.log(clone.peek()); // { id: 4 }

.toArray()

returns a copy of the remaining elements as an array.

return
array
runtime
O(n)

Example

console.log(stack.toArray()); // [{ id: 2 }, { id: 4 }, { id: 8 }]

.clear()

clears all elements from the stack.

runtime
O(1)

Example

stack.clear();
stack.size(); // 0
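As a usage sketch, the push/pop/isEmpty methods documented above are enough for a classic balanced-brackets check. A minimal stand-in class with the same interface is included so the snippet is self-contained; in a real project you would construct `new Stack()` from the package instead.

```javascript
// Usage sketch: a balanced-brackets check built on the stack interface
// documented above. MiniStack is a stand-in with the same push/pop/
// isEmpty contract so the snippet stands alone; swap in
// require('@datastructures-js/stack') for real use.
class MiniStack {
  constructor() { this._elements = []; }
  push(el) { this._elements.push(el); }
  pop() { return this._elements.length ? this._elements.pop() : null; }
  isEmpty() { return this._elements.length === 0; }
}

function balanced(str) {
  const pairs = { ')': '(', ']': '[', '}': '{' };
  const stack = new MiniStack();
  for (const ch of str) {
    if ('([{'.includes(ch)) stack.push(ch);
    else if (ch in pairs && stack.pop() !== pairs[ch]) return false;
  }
  return stack.isEmpty();
}

console.log(balanced('([]{})')); // true
console.log(balanced('(]'));     // false
```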

Build

grunt build




Google Cloud Common: Node.js Client


Common components for Cloud APIs Node.js Client Libraries

A comprehensive list of changes in each version may be found in the CHANGELOG.

Read more about the client libraries for Cloud APIs, including the older Google APIs Client Libraries, in Client Libraries Explained.


Quickstart

Installing the client library

npm install @google-cloud/common

It’s unlikely you will need to install this package directly, as it will be installed as a dependency when you install other @google-cloud packages.

The Google Cloud Common Node.js Client API Reference documentation also contains samples.

Our client libraries follow the Node.js release schedule. Libraries are compatible with all current active and maintenance versions of Node.js.

Client libraries targeting some end-of-life versions of Node.js are available, and can be installed via npm dist-tags. The dist-tags follow the naming convention legacy-(version).

Legacy Node.js versions are supported as a best effort:

Legacy tags available

Versioning

This library follows Semantic Versioning.

This library is considered to be General Availability (GA). This means it is stable; the code surface will not change in backwards-incompatible ways unless absolutely necessary (e.g. because of critical security issues) or with an extensive deprecation period. Issues and requests against GA libraries are addressed with the highest priority.

More Information: Google Cloud Platform Launch Stages

Contributing

Contributions welcome! See the Contributing Guide.

Please note that this README.md, the samples/README.md, and a variety of configuration files in this repository (including .nycrc and tsconfig.json) are generated from a central template. To edit one of these files, make an edit to its template in this directory.

Apache Version 2.0

See LICENSE



gaxios


An HTTP request client that provides an axios-like interface on top of node-fetch.

Install

$ npm install gaxios

Example

const {request} = require('gaxios');
const res = await request({
  url: 'https://www.googleapis.com/discovery/v1/apis/'
});

Setting Defaults

Gaxios supports setting default properties both on the default instance, and on additional instances. This is often useful when making many requests to the same domain with the same base settings. For example:

const gaxios = require('gaxios');
gaxios.instance.defaults = {
  baseURL: 'https://example.com',
  headers: {
    Authorization: 'SOME_TOKEN'
  }
}
gaxios.request({url: '/data'}).then(...);

Request Options

{
  // The url to which the request should be sent.  Required.
  url: string,

  // The HTTP method to use for the request.  Defaults to `GET`.
  method: 'GET',

  // The base Url to use for the request. Prepended to the `url` property above.
  baseURL: 'https://example.com',

  // The HTTP headers to be sent with the request.
  headers: { 'some': 'header' },

  // The data to send in the body of the request. Data objects will be serialized as JSON.
  data: {
    some: 'data'
  },

  // The max size of the http response content in bytes allowed.
  // Defaults to `0`, which is the same as unset.
  maxContentLength: 2000,

  // The max number of HTTP redirects to follow.
  // Defaults to 100.
  maxRedirects: 100,

  // The querystring parameters that will be encoded using `qs` and
  // appended to the url
  params: {
    querystring: 'parameters'
  },

  // By default, we use the `querystring` package in node core to serialize
  // querystring parameters.  You can override that and provide your
  // own implementation.
  paramsSerializer: (params) => {
    return qs.stringify(params);
  },

  // The timeout for the HTTP request. Defaults to 0.
  timeout: 1000,

  // Optional method to override making the actual HTTP request. Useful
  // for writing tests and instrumentation
  adapter?: async (options, defaultAdapter) => {
    const res = await defaultAdapter(options);
    res.data = {
      ...res.data,
      extraProperty: 'your extra property',
    };
    return res;
  };

  // The expected return type of the request.  Options are:
  // json | stream | blob | arraybuffer | text
  // Defaults to `json`.
  responseType: 'json',

  // The node.js http agent to use for the request.
  agent: someHttpsAgent,

  // Custom function to determine if the response is valid based on the
  // status code.  Defaults to (>= 200 && < 300)
  validateStatus: (status: number) => true,

  // Configuration for retrying of requests.
  retryConfig: {
    // The number of times to retry the request.  Defaults to 3.
    retry?: number;

    // The number of retries already attempted.
    currentRetryAttempt?: number;

    // The HTTP Methods that will be automatically retried.
    // Defaults to ['GET','PUT','HEAD','OPTIONS','DELETE']
    httpMethodsToRetry?: string[];

    // The HTTP response status codes that will automatically be retried.
    // Defaults to: [[100, 199], [429, 429], [500, 599]]
    statusCodesToRetry?: number[][];

    // Function to invoke when a retry attempt is made.
    onRetryAttempt?: (err: GaxiosError) => Promise<void> | void;

    // Function to invoke which determines if you should retry
    shouldRetry?: (err: GaxiosError) => Promise<boolean> | boolean;

    // When there is no response, the number of retries to attempt. Defaults to 2.
    noResponseRetries?: number;

    // The amount of time to initially delay the retry, in ms.  Defaults to 100ms.
    retryDelay?: number;
  },

  // Enables default configuration for retries.
  retry: boolean,

  // Cancelling a request requires the `abort-controller` library.
  // See https://github.com/bitinn/node-fetch#request-cancellation-with-abortsignal
  signal?: AbortSignal
}
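For illustration, the default retry decision documented in retryConfig above can be modeled in plain JavaScript. shouldRetrySketch is a hypothetical name for this sketch, not part of the gaxios API:

```javascript
// Sketch of the documented retry defaults (illustrative, not gaxios
// internals): retry only the listed idempotent methods, only while
// attempts remain, and only for statuses inside the listed ranges.
const httpMethodsToRetry = ['GET', 'PUT', 'HEAD', 'OPTIONS', 'DELETE'];
const statusCodesToRetry = [[100, 199], [429, 429], [500, 599]];

function shouldRetrySketch(method, status, attempt, maxRetries = 3) {
  if (attempt >= maxRetries) return false;
  if (!httpMethodsToRetry.includes(method)) return false;
  return statusCodesToRetry.some(
    ([min, max]) => status >= min && status <= max
  );
}

console.log(shouldRetrySketch('GET', 503, 0));  // true
console.log(shouldRetrySketch('POST', 503, 0)); // false
```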

Apache-2.0



is-plain-object

Returns true if an object was created by the Object constructor, or Object.create(null).

Please consider following this project’s author, Jon Schlinkert, and consider starring the project to show your :heart: and support.

Install

Install with npm:

$ npm install --save is-plain-object

Use isobject if you only want to check if the value is an object and not an array or null.

Usage

with es modules

import { isPlainObject } from 'is-plain-object';

or with commonjs

const { isPlainObject } = require('is-plain-object');

true when created by the Object constructor, or Object.create(null).

isPlainObject(Object.create({}));
//=> true
isPlainObject(Object.create(Object.prototype));
//=> true
isPlainObject({foo: 'bar'});
//=> true
isPlainObject({});
//=> true
isPlainObject(Object.create(null));
//=> true

false when not created by the Object constructor.

isPlainObject(1);
//=> false
isPlainObject(['foo', 'bar']);
//=> false
isPlainObject([]);
//=> false
isPlainObject(new Foo);
//=> false
isPlainObject(null);
//=> false
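A check consistent with all of the examples above can be sketched as follows. This is a hedged approximation, not the package’s exact source, and isPlainObjectSketch is an illustrative name:

```javascript
// Hedged approximation of the behavior shown above (not the package's
// actual source): accept objects whose constructor is Object (possibly
// inherited) or missing entirely, as with Object.create(null).
function isPlainObjectSketch(val) {
  if (Object.prototype.toString.call(val) !== '[object Object]') return false;
  const ctor = val.constructor;
  if (ctor === undefined) return true; // Object.create(null)
  return typeof ctor === 'function' &&
    ctor.prototype !== undefined &&
    Object.prototype.hasOwnProperty.call(ctor.prototype, 'isPrototypeOf');
}

console.log(isPlainObjectSketch({}));                  // true
console.log(isPlainObjectSketch(Object.create(null))); // true
console.log(isPlainObjectSketch([]));                  // false
```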

About

Contributing

Pull requests and stars are always welcome. For bugs and feature requests, please create an issue.

Running Tests

Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command:

$ npm install && npm test

Building docs

(This project’s readme.md is generated by verb, please don’t edit the readme directly. Any changes to the readme must be made in the .verb.md readme template.)

To generate the readme, run the following command:

$ npm install -g verbose/verb#dev verb-generate-readme && verb

You might also be interested in these projects:

Commits Contributor
19 jonschlinkert
6 TrySound
6 stevenvachon
3 onokumus
1 wtgtybhertgeghgtwtg

Author

Jon Schlinkert


This file was generated by verb-generate-readme, v0.8.0, on April 28, 2019.



posix-character-classes

POSIX character classes for creating regular expressions.

Install

Install with npm:

$ npm install --save posix-character-classes

Install with yarn:

$ yarn add posix-character-classes

Usage

var posix = require('posix-character-classes');
console.log(posix.alpha);
//=> 'A-Za-z'

POSIX Character classes

The POSIX standard supports the following classes or categories of characters (note that classes must be defined within brackets):

POSIX class | Equivalent to | Matches
----------- | ------------- | -------
`[:alnum:]` | `[A-Za-z0-9]` | digits, uppercase and lowercase letters
`[:alpha:]` | `[A-Za-z]` | upper- and lowercase letters
`[:ascii:]` | `[\x00-\x7F]` | ASCII characters
`[:blank:]` | `[ \t]` | space and TAB characters only
`[:cntrl:]` | `[\x00-\x1F\x7F]` | control characters
`[:digit:]` | `[0-9]` | digits
`[:graph:]` | `[^[:cntrl:]]` | graphic characters (all characters which have graphic representation)
`[:lower:]` | `[a-z]` | lowercase letters
`[:print:]` | `[[:graph:] ]` | graphic characters and space
`[:punct:]` | ``[-!"#$%&'()*+,./:;<=>?@[\]^_`{\|}~]`` | all punctuation characters (all graphic characters except letters and digits)
`[:space:]` | `[ \t\n\r\f\v]` | all blank (whitespace) characters, including spaces, tabs, new lines, carriage returns, form feeds, and vertical tabs
`[:upper:]` | `[A-Z]` | uppercase letters
`[:word:]` | `[A-Za-z0-9_]` | word characters
`[:xdigit:]` | `[0-9A-Fa-f]` | hexadecimal digits

Examples
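A small usage sketch. It assumes only what the table above documents: [:alnum:] is equivalent to [A-Za-z0-9], so the range is inlined here to keep the snippet self-contained (require('posix-character-classes').alnum would supply the same string).

```javascript
// Usage sketch: build a validator from the documented [:alnum:]
// equivalent. The range string is inlined per the table above.
const alnum = 'A-Za-z0-9';
const re = new RegExp('^[' + alnum + ']+$');

console.log(re.test('abc123'));  // true
console.log(re.test('abc 123')); // false
```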

About

Contributing

Pull requests and stars are always welcome. For bugs and feature requests, please create an issue.

Building docs

(This project’s readme.md is generated by verb, please don’t edit the readme directly. Any changes to the readme must be made in the .verb.md readme template.)

To generate the readme, run the following command:

$ npm install -g verbose/verb#dev verb-generate-readme && verb

Running tests

Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command:

$ npm install && npm test

Author

Jon Schlinkert


This file was generated by verb-generate-readme, v0.5.0, on April 20, 2017.




proxy-addr


Determine address of proxied request

Install

This is a Node.js module available through the npm registry. Installation is done using the npm install command:

$ npm install proxy-addr

API

var proxyaddr = require('proxy-addr')

proxyaddr(req, trust)

Return the address of the request, using the given trust parameter.

The trust argument is a function that returns true if you trust the address, false if you don’t. The closest untrusted address is returned.

proxyaddr(req, function (addr) { return addr === '127.0.0.1' })
proxyaddr(req, function (addr, i) { return i < 1 })

The trust argument may also be a single IP address string or an array of trusted addresses, as plain IP addresses, CIDR-formatted strings, or IP/netmask strings.

proxyaddr(req, '127.0.0.1')
proxyaddr(req, ['127.0.0.0/8', '10.0.0.0/8'])
proxyaddr(req, ['127.0.0.0/255.0.0.0', '192.168.0.0/255.255.0.0'])

This module also supports IPv6. Your IPv6 addresses will be normalized automatically (i.e. fe80::00ed:1 equals fe80:0:0:0:0:0:ed:1).

proxyaddr(req, '::1')
proxyaddr(req, ['::1/128', 'fe80::/10'])

This module will automatically work with IPv4-mapped IPv6 addresses as well to support node.js in IPv6-only mode. This means that you do not have to specify both ::ffff:a00:1 and 10.0.0.1.

As a convenience, this module also takes certain pre-defined names in addition to IP addresses, which expand into IP addresses:

proxyaddr(req, 'loopback')
proxyaddr(req, ['loopback', 'fc00:ac:1ab5:fff::1/64'])

When trust is specified as a function, it will be called for each address to determine if it is a trusted address. The function is given two arguments: addr and i, where addr is a string of the address to check and i is a number that represents the distance from the socket address.

proxyaddr.all(req, [trust])

Return all the addresses of the request, optionally stopping at the first untrusted. This array is ordered from closest to furthest (i.e. arr[0] === req.connection.remoteAddress).

proxyaddr.all(req)

The optional trust argument takes the same arguments as trust does in proxyaddr(req, trust).

proxyaddr.all(req, 'loopback')

proxyaddr.compile(val)

Compiles argument val into a trust function. This function takes the same arguments as trust does in proxyaddr(req, trust) and returns a function suitable for proxyaddr(req, trust).

var trust = proxyaddr.compile('loopback')
var addr = proxyaddr(req, trust)

This function is meant to be optimized for use against every request. It is recommended to compile a trust function up-front for the trusted configuration and pass that to proxyaddr(req, trust) for each request.

Testing

$ npm test

Benchmarks

$ npm run-script bench


is-number

Returns true if the value is a number, with comprehensive tests.

Install

Install with npm:

$ npm install --save is-number

Usage

To understand some of the rationale behind the decisions made in this library (and to learn about some oddities of number evaluation in JavaScript), see this gist.

var isNumber = require('is-number');

true

See the tests for more examples.

isNumber(5e3)      //=> true
isNumber(0xff)     //=> true
isNumber(-1.1)     //=> true
isNumber(0)        //=> true
isNumber(1)        //=> true
isNumber(1.1)      //=> true
isNumber(10)       //=> true
isNumber(10.10)    //=> true
isNumber(100)      //=> true
isNumber('-1.1')   //=> true
isNumber('0')      //=> true
isNumber('012')    //=> true
isNumber('0xff')   //=> true
isNumber('1')      //=> true
isNumber('1.1')    //=> true
isNumber('10')     //=> true
isNumber('10.10')  //=> true
isNumber('100')    //=> true
isNumber('5e3')    //=> true
isNumber(parseInt('012'))   //=> true
isNumber(parseFloat('012')) //=> true

False

See the tests for more examples.

isNumber('foo')             //=> false
isNumber([1])               //=> false
isNumber([])                //=> false
isNumber(function () {})    //=> false
isNumber(Infinity)          //=> false
isNumber(NaN)               //=> false
isNumber(new Array('abc'))  //=> false
isNumber(new Array(2))      //=> false
isNumber(new Buffer('abc')) //=> false
isNumber(null)              //=> false
isNumber(undefined)         //=> false
isNumber({abc: 'abc'})      //=> false
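A check consistent with the examples above can be sketched in a few lines. This is a hedged approximation with an illustrative name (isNumberSketch), not the module’s actual source:

```javascript
// Hedged approximation of the behavior shown above (not is-number's
// actual source): finite numbers pass, and non-empty strings pass when
// they coerce to a finite number.
function isNumberSketch(val) {
  if (typeof val === 'number') {
    return val - val === 0; // rejects NaN and ±Infinity
  }
  if (typeof val === 'string' && val.trim() !== '') {
    return Number.isFinite(Number(val));
  }
  return false;
}

console.log(isNumberSketch('0xff')); // true
console.log(isNumberSketch('foo'));  // false
```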

About

Contributing

Pull requests and stars are always welcome. For bugs and feature requests, please create an issue.

Building docs

(This document was generated by verb-generate-readme (a verb generator), please don’t edit the readme directly. Any changes to the readme must be made in .verb.md.)

To generate the readme and API documentation with verb:

$ npm install -g verb verb-generate-readme && verb

Running tests

Install dev dependencies:

$ npm install -d && npm test

Author

Jon Schlinkert


This file was generated by verb-generate-readme, v0.1.30, on September 10, 2016.

Utils for ESLint Plugins

Utilities for working with TypeScript + ESLint together.


Note

This package has inherited its version number from the @typescript-eslint project. This means that even though this package is 2.x.y, you shouldn’t expect 100% stability between minor version bumps, i.e. treat it as a 0.x.y package.

Feel free to use it now, and let us know what utilities you need or send us PRs with utilities you build on top of it.

Once it is stable, it will be renamed to @typescript-eslint/util for a 4.0.0 release.

Exports

Name Description
ASTUtils Tools for operating on the ESTree AST. Also includes the eslint-utils package, correctly typed to work with the types found in TSESTree
ESLintUtils Tools for creating ESLint rules with TypeScript.
JSONSchema Types from the @types/json-schema package, re-exported to save you having to manually import them. Also ensures you’re using the same version of the types as this package.
TSESLint Types for ESLint, correctly typed to work with the types found in TSESTree.
TSESLintScope The eslint-scope package, correctly typed to work with the types found in both TSESTree and TSESLint
TSESTree Types for the TypeScript flavor of ESTree created by @typescript-eslint/typescript-estree.
AST_NODE_TYPES An enum with the names of every single node found in TSESTree.
AST_TOKEN_TYPES An enum with the names of every single token found in TSESTree.
ParserServices Typing for the parser services provided when parsing a file using @typescript-eslint/typescript-estree.

Contributing

See the contributing guide here



graceful-fs

graceful-fs functions as a drop-in replacement for the fs module, making various improvements.

The improvements are meant to normalize behavior across different platforms and environments, and to make filesystem access more resilient to errors.

Improvements over fs module

USAGE

// use just like fs
var fs = require('graceful-fs')

// now go and do stuff with it...
fs.readFileSync('some-file-or-whatever')

Global Patching

If you want to patch the global fs module (or any other fs-like module) you can do this:

// Make sure to read the caveat below.
var realFs = require('fs')
var gracefulFs = require('graceful-fs')
gracefulFs.gracefulify(realFs)

This should only ever be done at the top-level application layer, in order to delay on EMFILE errors from any fs-using dependencies. You should not do this in a library, because it can cause unexpected delays in other parts of the program.

Changes

This module is fairly stable at this point, and used by a lot of things. That being said, because it implements a subtle behavior change in a core part of the node API, even modest changes can be extremely breaking, and the versioning is thus biased towards bumping the major when in doubt.

The main change between major versions has been switching between providing a fully-patched fs module vs monkey-patching the node core builtin, and the approach by which a non-monkey-patched fs was created.

The goal is to trade EMFILE errors for slower fs operations. So, if you try to open a zillion files, rather than crashing, open operations will be queued up and wait for something else to close.
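The queue-and-wait behavior described above can be sketched in a few lines. This is an illustrative model, not graceful-fs internals: fakeOpen stands in for fs.open with a tiny descriptor limit, and close frees a slot and retries the oldest queued operation.

```javascript
// Illustrative model of the EMFILE handling described above (not
// graceful-fs's actual source). fakeOpen stands in for fs.open.
const MAX_FDS = 2;
let fdsInUse = MAX_FDS; // start saturated so the first open must queue
const pending = [];

function fakeOpen(path, cb) {
  if (fdsInUse >= MAX_FDS) {
    cb(Object.assign(new Error('too many open files'), { code: 'EMFILE' }));
  } else {
    fdsInUse++;
    cb(null, fdsInUse); // fake descriptor
  }
}

// Instead of surfacing EMFILE, queue the operation and retry later.
function open(path, cb) {
  fakeOpen(path, (err, fd) => {
    if (err && err.code === 'EMFILE') pending.push(() => open(path, cb));
    else cb(err, fd);
  });
}

// Closing a descriptor frees a slot and kicks the queue.
function close() {
  fdsInUse--;
  const retry = pending.shift();
  if (retry) retry();
}
```

Calling open while saturated queues the request; the queued callback fires only after close frees a descriptor. That is exactly the trade described above: no EMFILE crash, just a slower open.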

There are advantages to each approach. Monkey-patching the fs means that no EMFILE errors can possibly occur anywhere in your application, because everything is using the same core fs module, which is patched. However, it can also obviously cause undesirable side-effects, especially if the module is loaded multiple times.

Implementing a separate-but-identical patched fs module is more surgical (and doesn’t run the risk of patching multiple times), but also imposes the challenge of keeping in sync with the core module.

The current approach loads the fs module, and then creates a lookalike object that has all the same methods, except a few that are patched. It is safe to use in all versions of Node from 0.8 through 7.0.

v4

v3

v2.1.0

v2.0

v1.1

1.0



arr-diff

Returns an array with only the unique values from the first array, by excluding all values from additional arrays using strict equality for comparisons.

Install

Install with npm:

$ npm install --save arr-diff

Install with yarn:

$ yarn add arr-diff

Install with bower

$ bower install arr-diff --save

Usage

Returns the difference between the first array and additional arrays.

var diff = require('arr-diff');

var a = ['a', 'b', 'c', 'd'];
var b = ['b', 'c'];

console.log(diff(a, b))
//=> ['a', 'd']
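For illustration, the strict-equality exclusion described above can be sketched in a few lines. diffSketch is a hypothetical name, not the module’s export; Set membership uses SameValueZero, which matches strict equality for values like these.

```javascript
// Hedged sketch of the behavior described above (not arr-diff's
// source): keep values from the first array that appear in none of
// the additional arrays.
function diffSketch(arr, ...rest) {
  const exclude = new Set([].concat(...rest));
  return arr.filter(function (el) {
    return !exclude.has(el);
  });
}

console.log(diffSketch(['a', 'b', 'c', 'd'], ['b', 'c']));
//=> ['a', 'd']
```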

Benchmarks

This library versus array-differ, on April 14, 2017:

Benchmarking: (4 of 4)
 · long-dupes
 · long
 · med
 · short

# benchmark/fixtures/long-dupes.js (100804 bytes)
  arr-diff-3.0.0 x 822 ops/sec ±0.67% (86 runs sampled)
  arr-diff-4.0.0 x 2,141 ops/sec ±0.42% (89 runs sampled)
  array-differ x 708 ops/sec ±0.70% (89 runs sampled)

  fastest is arr-diff-4.0.0

# benchmark/fixtures/long.js (94529 bytes)
  arr-diff-3.0.0 x 882 ops/sec ±0.60% (87 runs sampled)
  arr-diff-4.0.0 x 2,329 ops/sec ±0.97% (83 runs sampled)
  array-differ x 769 ops/sec ±0.61% (90 runs sampled)

  fastest is arr-diff-4.0.0

# benchmark/fixtures/med.js (708 bytes)
  arr-diff-3.0.0 x 856,150 ops/sec ±0.42% (89 runs sampled)
  arr-diff-4.0.0 x 4,665,249 ops/sec ±1.06% (89 runs sampled)
  array-differ x 653,888 ops/sec ±1.02% (86 runs sampled)

  fastest is arr-diff-4.0.0

# benchmark/fixtures/short.js (60 bytes)
  arr-diff-3.0.0 x 3,078,467 ops/sec ±0.77% (93 runs sampled)
  arr-diff-4.0.0 x 9,213,296 ops/sec ±0.65% (89 runs sampled)
  array-differ x 1,337,051 ops/sec ±0.91% (92 runs sampled)

  fastest is arr-diff-4.0.0

About

Contributing

Pull requests and stars are always welcome. For bugs and feature requests, please create an issue.

Commits Contributor
33 jonschlinkert
2 paulmillr

Building docs

(This project’s readme.md is generated by verb, please don’t edit the readme directly. Any changes to the readme must be made in the .verb.md readme template.)

To generate the readme, run the following command:

$ npm install -g verbose/verb#dev verb-generate-readme && verb

Running tests

Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command:

$ npm install && npm test

Author

Jon Schlinkert


This file was generated by verb-generate-readme, v0.5.0, on April 14, 2017.



convert-source-map


Converts a source-map from/to different formats and allows adding/changing properties.

var convert = require('convert-source-map');

var json = convert
  .fromComment('//# sourceMappingURL=data:application/json;base64,eyJ2ZXJzaW9uIjozLCJmaWxlIjoiYnVpbGQvZm9vLm1pbi5qcyIsInNvdXJjZXMiOlsic3JjL2Zvby5qcyJdLCJuYW1lcyI6W10sIm1hcHBpbmdzIjoiQUFBQSIsInNvdXJjZVJvb3QiOiIvIn0=')
  .toJSON();

var modified = convert
  .fromComment('//# sourceMappingURL=data:application/json;base64,eyJ2ZXJzaW9uIjozLCJmaWxlIjoiYnVpbGQvZm9vLm1pbi5qcyIsInNvdXJjZXMiOlsic3JjL2Zvby5qcyJdLCJuYW1lcyI6W10sIm1hcHBpbmdzIjoiQUFBQSIsInNvdXJjZVJvb3QiOiIvIn0=')
  .setProperty('sources', [ 'SRC/FOO.JS' ])
  .toJSON();

console.log(json);
console.log(modified);
{"version":3,"file":"build/foo.min.js","sources":["src/foo.js"],"names":[],"mappings":"AAAA","sourceRoot":"/"}
{"version":3,"file":"build/foo.min.js","sources":["SRC/FOO.JS"],"names":[],"mappings":"AAAA","sourceRoot":"/"}
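The comment format the example above round-trips can be reproduced without the library. This is an illustrative sketch of the encoding that fromComment and toBase64 work with, not the module’s internals:

```javascript
// Sketch of the data-URI comment encoding (illustrative, not the
// library's source): serialize the map, base64-encode it into a
// sourceMappingURL comment, then decode it back out.
const map = {
  version: 3,
  file: 'build/foo.min.js',
  sources: ['src/foo.js'],
  names: [],
  mappings: 'AAAA',
  sourceRoot: '/'
};

const comment = '//# sourceMappingURL=data:application/json;base64,' +
  Buffer.from(JSON.stringify(map)).toString('base64');

const decoded = JSON.parse(
  Buffer.from(comment.split('base64,')[1], 'base64').toString('utf8')
);

console.log(decoded.file); // build/foo.min.js
```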

API

fromObject(obj)

Returns source map converter from given object.

fromJSON(json)

Returns source map converter from given json string.

fromBase64(base64)

Returns source map converter from given base64 encoded json string.

fromComment(comment)

Returns source map converter from given base64 encoded json string prefixed with //# sourceMappingURL=....

fromMapFileComment(comment, mapFileDir)

Returns source map converter from given filename by parsing //# sourceMappingURL=filename.

filename must point to a file that is found inside the mapFileDir. Most tools store this file right next to the generated file, i.e. the one containing the source map.

fromSource(source)

Finds last sourcemap comment in file and returns source map converter or returns null if no source map comment was found.

fromMapFileSource(source, mapFileDir)

Finds last sourcemap comment in file and returns source map converter or returns null if no source map comment was found.

The sourcemap will be read from the map file found by parsing # sourceMappingURL=file comment. For more info see fromMapFileComment.

toObject()

Returns a copy of the underlying source map.

toJSON(space)

Converts source map to json string. If space is given (optional), this will be passed to JSON.stringify when the JSON string is generated.

toBase64()

Converts source map to base64 encoded json string.

toComment(options)

Converts source map to an inline comment that can be appended to the source-file.

By default, the comment is formatted like: //# sourceMappingURL=..., which you would normally see in a JS source file.

When options.multiline == true, the comment is formatted like: /*# sourceMappingURL=... */, which you would find in a CSS source file.

addProperty(key, value)

Adds given property to the source map. Throws an error if property already exists.

setProperty(key, value)

Sets given property to the source map. If property doesn’t exist it is added, otherwise its value is updated.

getProperty(key)

Gets given property of the source map.

removeComments(src)

Returns src with all source map comments removed.

removeMapFileComments(src)

Returns src with all source map comments pointing to map files removed.

commentRegex

Provides a fresh RegExp each time it is accessed. Can be used to find source map comments.

mapFileCommentRegex

Provides a fresh RegExp each time it is accessed. Can be used to find source map comments pointing to map files.

generateMapFileComment(file, options)

Returns a comment that links to an external source map via file.

By default, the comment is formatted like: //# sourceMappingURL=..., which you would normally see in a JS source file.

When options.multiline == true, the comment is formatted like: /*# sourceMappingURL=... */, which you would find in a CSS source file.




regex-not

Create a JavaScript regular expression for matching everything except for the given string.

Please consider following this project’s author, Jon Schlinkert, and consider starring the project to show your :heart: and support.

Install

Install with npm:

$ npm install --save regex-not

Usage

var not = require('regex-not');

The main export is a function that takes a string and an options object.

not(string[, options]);

Example

var not = require('regex-not');
console.log(not('foo'));
//=> /^(?:(?!^(?:foo)$).)+$/

Strict matching

By default, the returned regex is for strictly (not) matching the exact given pattern (in other words, “match this string if it does NOT exactly equal foo”):

var re = not('foo');
console.log(re.test('foo'));     //=> false
console.log(re.test('bar'));     //=> true
console.log(re.test('foobar'));  //=> true
console.log(re.test('barfoo'));  //=> true
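The pattern shown above can be reproduced by hand, which clarifies how strict matching works: a negative lookahead rejects input that exactly equals the given string. notSketch is an illustrative name, not the library’s implementation:

```javascript
// Illustrative re-creation of the pattern shown above (not the
// library's source): the lookahead fails only when the whole input
// is exactly the given string.
function notSketch(str) {
  return new RegExp('^(?:(?!^(?:' + str + ')$).)+$');
}

const re = notSketch('foo');
console.log(re.test('foo'));    // false
console.log(re.test('foobar')); // true
```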

.create

Returns a string to allow you to create your own regex:

console.log(not.create('foo'));
//=> '(?:(?!^(?:foo)$).)+'

Options

options.contains

You can relax strict matching by setting options.contains to true (in other words, “match this string if it does NOT contain foo”):

var re = not('foo', {contains: true});
console.log(re.test('foo'));     //=> false
console.log(re.test('bar'));     //=> true
console.log(re.test('foobar'));  //=> false
console.log(re.test('barfoo'));  //=> false

About

Contributing

Pull requests and stars are always welcome. For bugs and feature requests, please create an issue.

Running Tests

Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command:

$ npm install && npm test

Building docs

(This project’s readme.md is generated by verb, please don’t edit the readme directly. Any changes to the readme must be made in the .verb.md readme template.)

To generate the readme, run the following command:

$ npm install -g verbose/verb#dev verb-generate-readme && verb

You might also be interested in these projects:

Commits Contributor
9 jonschlinkert
1 doowb
1 EdwardBetts

Author

Jon Schlinkert


This file was generated by verb-generate-readme, v0.6.0, on February 19, 2018.



has-value

Returns true if a value exists, false if empty. Works with deeply nested values using object paths.

Install

Install with npm:

$ npm install --save has-value

Works for:

Usage

Works with property values (supports object-path notation, like foo.bar) or a single value:

var hasValue = require('has-value');

hasValue('foo');
hasValue({foo: 'bar'}, 'foo');
hasValue({a: {b: {c: 'foo'}}}, 'a.b.c');
//=> true

hasValue('');
hasValue({foo: ''}, 'foo');
//=> false

hasValue(0);
hasValue(1);
hasValue({foo: 0}, 'foo');
hasValue({foo: 1}, 'foo');
hasValue({foo: null}, 'foo');
hasValue({foo: {bar: 'a'}}, 'foo');
hasValue({foo: {bar: 'a'}}, 'foo.bar');
//=> true

hasValue({foo: {}}, 'foo');
hasValue({foo: {bar: {}}}, 'foo.bar');
hasValue({foo: undefined}, 'foo');
//=> false

hasValue([]);
hasValue([[]]);
hasValue([[], []]);
hasValue([undefined]);
hasValue({foo: []}, 'foo');
//=> false

hasValue([0]);
hasValue([null]);
hasValue(['foo']);
hasValue({foo: ['a']}, 'foo');
//=> true

hasValue(function() {})
hasValue(function(foo) {})
hasValue({foo: function(foo) {}}, 'foo'); 
hasValue({foo: function() {}}, 'foo');
//=> true

hasValue(true);
hasValue(false);
hasValue({foo: true}, 'foo');
hasValue({foo: false}, 'foo');
//=> true

isEmpty

To do the opposite and test for empty values, do:

function isEmpty(o) {
  return !hasValue.apply(hasValue, arguments);
}
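For intuition, the lookup and emptiness rules shown above can be approximated in a few lines of plain JavaScript. This is a simplified, hypothetical stand-in for illustration, not the module's actual implementation:

```javascript
// Simplified stand-in for has-value (illustration only): resolve a
// dotted object path, then treat undefined, '', empty objects and
// empty arrays as "no value". null, 0, false and functions count as values.
function hasValueSketch(obj, path) {
  let value = obj;
  if (typeof path === 'string') {
    for (const key of path.split('.')) {
      if (value == null) return false;
      value = value[key];
    }
  }
  if (value === undefined || value === '') return false;
  if (Array.isArray(value)) return value.some(v => hasValueSketch(v));
  if (typeof value === 'object' && value !== null) {
    return Object.keys(value).length > 0;
  }
  return true;
}

console.log(hasValueSketch({a: {b: {c: 'foo'}}}, 'a.b.c')); // true
console.log(hasValueSketch({foo: ''}, 'foo'));              // false
console.log(hasValueSketch({foo: 0}, 'foo'));               // true
console.log(hasValueSketch([[], []]));                      // false
```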

Release history

v1.0.0

About

Contributing

Pull requests and stars are always welcome. For bugs and feature requests, please create an issue.

Commits Contributor
17 jonschlinkert
2 rmharrison

Building docs

(This project’s readme.md is generated by verb, please don’t edit the readme directly. Any changes to the readme must be made in the .verb.md readme template.)

To generate the readme, run the following command:

$ npm install -g verbose/verb#dev verb-generate-readme && verb

Running tests

Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command:

$ npm install && npm test

Author

Jon Schlinkert


This file was generated by verb-generate-readme, v0.6.0, on May 19, 2017.



Acorn AST walker

An abstract syntax tree walker for the ESTree format.

Community

You are welcome to report bugs or create pull requests on github. For questions and discussion, please use the Tern discussion forum.

Installation

The easiest way to install acorn is from npm:

npm install acorn-walk

Alternately, you can download the source and build acorn yourself:

git clone https://github.com/acornjs/acorn.git
cd acorn
npm install

Interface

An algorithm for recursing through a syntax tree is stored as an object, with a property for each tree node type holding a function that will recurse through such a node. There are several ways to run such a walker.

simple(node, visitors, base, state) does a ‘simple’ walk over a tree. node should be the AST node to walk, and visitors an object with properties whose names correspond to node types in the ESTree spec. The properties should contain functions that will be called with the node object and, if applicable, the state at that point. The last two arguments are optional. base is a walker algorithm, and state is a start state. The default walker will simply visit all statements and expressions and not produce a meaningful state. (An example of a use of state is to track scope at each point in the tree.)

const acorn = require("acorn")
const walk = require("acorn-walk")

walk.simple(acorn.parse("let x = 10"), {
  Literal(node) {
    console.log(`Found a literal: ${node.value}`)
  }
})

ancestor(node, visitors, base, state) does a ‘simple’ walk over a tree, building up an array of ancestor nodes (including the current node) and passing the array to the callbacks as a third parameter.

const acorn = require("acorn")
const walk = require("acorn-walk")

walk.ancestor(acorn.parse("foo('hi')"), {
  Literal(_, ancestors) {
    console.log("This literal's ancestors are:", ancestors.map(n => n.type))
  }
})

recursive(node, state, functions, base) does a ‘recursive’ walk, where the walker functions are responsible for continuing the walk on the child nodes of their target node. state is the start state, and functions should contain an object that maps node types to walker functions. Such functions are called with (node, state, c) arguments, and can cause the walk to continue on a sub-node by calling the c argument on it with (node, state) arguments. The optional base argument provides the fallback walker functions for node types that aren’t handled in the functions object. If not given, the default walkers will be used.
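The (node, state, c) contract can be seen with a hand-rolled walker over a tiny ESTree-like fragment. This is a plain-JavaScript sketch of the calling convention, not acorn-walk's own code:

```javascript
// A hand-built ESTree-like fragment representing 1 + (2 + 3).
const ast = {
  type: 'BinaryExpression',
  left: { type: 'Literal', value: 1 },
  right: {
    type: 'BinaryExpression',
    left: { type: 'Literal', value: 2 },
    right: { type: 'Literal', value: 3 }
  }
};

// Walker functions receive (node, state, c) and decide which
// children to continue into by calling c(child, state).
const visitors = {
  BinaryExpression(node, state, c) {
    c(node.left, state);
    c(node.right, state);
  },
  Literal(node, state) {
    state.values.push(node.value);
  }
};

function walk(node, state) {
  visitors[node.type](node, state, walk);
}

const state = { values: [] };
walk(ast, state);
console.log(state.values); // [1, 2, 3]
```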

make(functions, base) builds a new walker object by using the walker functions in functions and filling in the missing ones by taking defaults from base.

full(node, callback, base, state) does a ‘full’ walk over a tree, calling the callback with the arguments (node, state, type) for each node.

fullAncestor(node, callback, base, state) does a ‘full’ walk over a tree, building up an array of ancestor nodes (including the current node) and passing the array to the callbacks as a third parameter.

const acorn = require("acorn")
const walk = require("acorn-walk")

walk.full(acorn.parse("1 + 1"), node => {
  console.log(`There's a ${node.type} node at ${node.start}`)
})

findNodeAt(node, start, end, test, base, state) tries to locate a node in a tree at the given start and/or end offsets, which satisfies the predicate test. start and end can be either null (as wildcard) or a number. test may be a string (indicating a node type) or a function that takes (nodeType, node) arguments and returns a boolean indicating whether this node is interesting. base and state are optional, and can be used to specify a custom walker. Nodes are tested from inner to outer, so if two nodes match the boundaries, the inner one will be preferred.

findNodeAround(node, pos, test, base, state) is a lot like findNodeAt, but will match any node that exists ‘around’ (spanning) the given position.

findNodeAfter(node, pos, test, base, state) is similar to findNodeAround, but will match all nodes after the given position (testing outer nodes before inner nodes).



fragment-cache NPM version NPM downloads Linux Build Status

A cache for managing namespaced sub-caches

Install

Install with npm:

$ npm install --save fragment-cache

Usage

var Fragment = require('fragment-cache');
var fragment = new Fragment();

API

FragmentCache

Create a new FragmentCache with an optional object to use for caches.

Example

var fragment = new FragmentCache();

Params

.cache

Get cache name from the fragment.caches object. Creates a new MapCache if it doesn’t already exist.

Example

var cache = fragment.cache('files');
console.log(fragment.caches.hasOwnProperty('files'));
//=> true

Params

.set

Set a value for property key on cache name

Example

fragment.set('files', 'somefile.js', new File({path: 'somefile.js'}));

Params

.has

Returns true if a non-undefined value is set for key on fragment cache name.

Example

var cache = fragment.cache('files');
cache.set('somefile.js');

console.log(cache.has('somefile.js'));
//=> true

console.log(cache.has('some-other-file.js'));
//=> false

Params

.get

Get name, or if specified, the value of key. Invokes the cache method, so that cache name will be created if it doesn’t already exist. If key is not passed, the entire cache (name) is returned.

Example

var Vinyl = require('vinyl');
var cache = fragment.cache('files');
cache.set('somefile.js', new Vinyl({path: 'somefile.js'}));
console.log(cache.get('somefile.js'));
//=> <File "somefile.js">

Params
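Taken together, the behavior described above can be approximated with plain Maps. This is a conceptual sketch of namespaced sub-caches, not the module's implementation (which creates MapCache instances):

```javascript
// Conceptual sketch of a namespaced cache (not fragment-cache's code).
class FragmentCacheSketch {
  constructor() {
    this.caches = new Map();
  }
  // Get (or lazily create) the sub-cache for `name`.
  cache(name) {
    if (!this.caches.has(name)) this.caches.set(name, new Map());
    return this.caches.get(name);
  }
  set(name, key, value) {
    this.cache(name).set(key, value);
    return this;
  }
  has(name, key) {
    return this.cache(name).get(key) !== undefined;
  }
  // With no `key`, return the whole sub-cache.
  get(name, key) {
    const sub = this.cache(name);
    return key === undefined ? sub : sub.get(key);
  }
}

const fragment = new FragmentCacheSketch();
fragment.set('files', 'somefile.js', {path: 'somefile.js'});
console.log(fragment.has('files', 'somefile.js'));      // true
console.log(fragment.get('files', 'somefile.js').path); // 'somefile.js'
```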

About

Contributing

Pull requests and stars are always welcome. For bugs and feature requests, please create an issue.

Building docs

(This document was generated by verb-generate-readme (a verb generator), please don’t edit the readme directly. Any changes to the readme must be made in .verb.md.)

To generate the readme and API documentation with verb:

$ npm install -g verb verb-generate-readme && verb

Running tests

Install dev dependencies:

$ npm install -d && npm test

Author

Jon Schlinkert


This file was generated by verb-generate-readme, v0.2.0, on October 17, 2016.



extend-shallow NPM version NPM monthly downloads NPM total downloads Linux Build Status

Extend an object with the properties of additional objects. node.js/javascript util.

Please consider following this project’s author, Jon Schlinkert, and consider starring the project to show your :heart: and support.

Install

Install with npm:

$ npm install --save extend-shallow

Usage

var extend = require('extend-shallow');

extend({a: 'b'}, {c: 'd'})
//=> {a: 'b', c: 'd'}

Pass an empty object to shallow clone:

var obj = {};
extend(obj, {a: 'b'}, {c: 'd'})
//=> {a: 'b', c: 'd'}
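For plain objects, the observable result of the examples above matches what the built-in Object.assign produces, which makes it a convenient point of comparison (this is not extend-shallow's implementation, just an equivalent stdlib sketch for these cases):

```javascript
// Shallow-extend behavior, compared against the built-in Object.assign.
console.log(Object.assign({}, {a: 'b'}, {c: 'd'}));
// { a: 'b', c: 'd' }

// Later sources win on key collisions, and nested objects are
// copied by reference (shallow), not cloned.
const nested = {x: 1};
const out = Object.assign({}, {a: nested, b: 1}, {b: 2});
console.log(out.a === nested); // true: same reference, not a clone
console.log(out.b);            // 2
```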

About

Contributing

Pull requests and stars are always welcome. For bugs and feature requests, please create an issue.

Running Tests

Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command:

Building docs

(This project’s readme.md is generated by verb, please don’t edit the readme directly. Any changes to the readme must be made in the .verb.md readme template.)

To generate the readme, run the following command:

You might also be interested in these projects:

Commits Contributor
33 jonschlinkert
1 pdehaan

Author

Jon Schlinkert


This file was generated by verb-generate-readme, v0.6.0, on November 19, 2017.



YAML

yaml is a JavaScript parser and stringifier for YAML, a human friendly data serialization standard. It supports both parsing and stringifying data using all versions of YAML, along with all common data schemas. As a particularly distinguishing feature, yaml fully supports reading and writing comments and blank lines in YAML documents.

For the purposes of versioning, any changes that break any of the endpoints or APIs documented here will be considered semver-major breaking changes. Undocumented library internals may change between minor versions, and previous APIs may be deprecated (but not removed).

For more information, see the project’s documentation site: eemeli.org/yaml

To install:

npm install yaml

Note: yaml 0.x and 1.x are rather different implementations. For the earlier yaml, see tj/js-yaml.

API Overview

The API provided by yaml has three layers, depending on how deep you need to go: Parse & Stringify, Documents, and the CST Parser. The first has the simplest API and “just works”, the second gets you all the bells and whistles supported by the library along with a decent AST, and the third is the closest to YAML source, making it fast, raw, and crude.

import YAML from 'yaml'
// or
const YAML = require('yaml')

Parse & Stringify

YAML Documents

import { Pair, YAMLMap, YAMLSeq } from 'yaml/types'

CST Parser

import parseCST from 'yaml/parse-cst'

YAML.parse

# file.yml
YAML:
  - A human-readable data serialization language
  - https://en.wikipedia.org/wiki/YAML
yaml:
  - A complete JavaScript implementation
  - https://www.npmjs.com/package/yaml
import fs from 'fs'
import YAML from 'yaml'

YAML.parse('3.14159')
// 3.14159

YAML.parse('[ true, false, maybe, null ]\n')
// [ true, false, 'maybe', null ]

const file = fs.readFileSync('./file.yml', 'utf8')
YAML.parse(file)
// { YAML:
//   [ 'A human-readable data serialization language',
//     'https://en.wikipedia.org/wiki/YAML' ],
//   yaml:
//   [ 'A complete JavaScript implementation',
//     'https://www.npmjs.com/package/yaml' ] }

YAML.stringify

import YAML from 'yaml'

YAML.stringify(3.14159)
// '3.14159\n'

YAML.stringify([true, false, 'maybe', null])
// `- true
// - false
// - maybe
// - null
// `

YAML.stringify({ number: 3, plain: 'string', block: 'two\nlines\n' })
// `number: 3
// plain: string
// block: >
//   two
//
//   lines
// `

Browser testing provided by:



is-windows NPM version NPM monthly downloads NPM total downloads Linux Build Status

Returns true if the platform is windows. UMD module, works with node.js, commonjs, browser, AMD, electron, etc.

Please consider following this project’s author, Jon Schlinkert, and consider starring the project to show your :heart: and support.

Install

Install with npm:

$ npm install --save is-windows

Heads up!

As of v0.2.0 this module always returns a function.

Node.js usage

var isWindows = require('is-windows');

console.log(isWindows());
//=> returns true if the platform is windows
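In Node.js the check reduces to inspecting process.platform. A minimal sketch of the idea (the published module also covers browser and other environments, so this is not its exact code):

```javascript
// Minimal sketch: true on Windows, false elsewhere (Node.js only).
function isWindowsSketch() {
  return process.platform === 'win32';
}

console.log(isWindowsSketch());
```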

About

Contributing

Pull requests and stars are always welcome. For bugs and feature requests, please create an issue.

Running Tests

Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command:

Building docs

(This project’s readme.md is generated by verb, please don’t edit the readme directly. Any changes to the readme must be made in the .verb.md readme template.)

To generate the readme, run the following command:

You might also be interested in these projects:

Commits Contributor
11 jonschlinkert
4 doowb
1 SimenB
1 gucong3000

Author

Jon Schlinkert


This file was generated by verb-generate-readme, v0.6.0, on February 14, 2018.



glob-parent

NPM version Downloads Azure Pipelines Build Status Travis Build Status AppVeyor Build Status Coveralls Status Gitter chat

Extract the non-magic parent path from a glob string.

Usage

var globParent = require('glob-parent');

globParent('path/to/*.js'); // 'path/to'
globParent('/root/path/to/*.js'); // '/root/path/to'
globParent('/*.js'); // '/'
globParent('*.js'); // '.'
globParent('**/*.js'); // '.'
globParent('path/{to,from}'); // 'path'
globParent('path/!(to|from)'); // 'path'
globParent('path/?(to|from)'); // 'path'
globParent('path/+(to|from)'); // 'path'
globParent('path/*(to|from)'); // 'path'
globParent('path/@(to|from)'); // 'path'
globParent('path/**/*'); // 'path'

// if provided a non-glob path, returns the nearest dir
globParent('path/foo/bar.js'); // 'path/foo'
globParent('path/foo/'); // 'path/foo'
globParent('path/foo'); // 'path' (see issue #3 for details)
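The core idea — walk path segments until one contains glob magic — can be sketched naively. This hypothetical simplification ignores escaping and the Windows handling described below, and is not the library's code:

```javascript
// Naive sketch of glob-parent's core idea (no escaping, no Windows handling).
function naiveGlobParent(pattern) {
  const segments = pattern.split('/');
  const isMagic = seg => /[*?[\]{}()!+@]/.test(seg);
  const parent = [];
  for (const seg of segments) {
    if (isMagic(seg)) break;
    parent.push(seg);
  }
  // Non-glob input: treat the last segment as a file name and drop it.
  if (parent.length === segments.length) parent.pop();
  return parent.join('/') || (pattern.startsWith('/') ? '/' : '.');
}

console.log(naiveGlobParent('path/to/*.js'));    // 'path/to'
console.log(naiveGlobParent('/*.js'));           // '/'
console.log(naiveGlobParent('**/*.js'));         // '.'
console.log(naiveGlobParent('path/foo/bar.js')); // 'path/foo'
```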

API

globParent(maybeGlobString, [options])

Takes a string and returns the part of the path before the glob begins. Be aware of Escaping rules and Limitations below.

options

{
  // Disables the automatic conversion of slashes for Windows
  flipBackslashes: true
}

Escaping

The following characters have special significance in glob patterns and must be escaped if you want them to be treated as regular path characters:

Example

globParent('foo/[bar]/') // 'foo'
globParent('foo/\\[bar]/') // 'foo/[bar]'

Limitations

Braces & Brackets

This library attempts a quick and imperfect method of determining which path parts have glob magic without fully parsing/lexing the pattern. There are some advanced use cases that can trip it up, such as nested braces where the outer pair is escaped and the inner one contains a path separator. If you find yourself in the unlikely circumstance of being affected by this or need to ensure higher-fidelity glob handling in your library, it is recommended that you pre-process your input with expand-braces and/or expand-brackets.

Windows

Backslashes are not valid path separators for globs. If a path with backslashes is provided anyway, for simple cases, glob-parent will replace the path separator for you and return the non-glob parent path (now with forward-slashes, which are still valid as Windows path separators).

This cannot be used in conjunction with escape characters.

// BAD
globParent('C:\\Program Files \\(x86\\)\\*.ext') // 'C:/Program Files /(x86/)'

// GOOD
globParent('C:/Program Files\\(x86\\)/*.ext') // 'C:/Program Files (x86)'

If you are using escape characters for a pattern without path parts (i.e. relative to cwd), prefix with ./ to avoid confusing glob-parent.

// BAD
globParent('foo \\[bar]') // 'foo '
globParent('foo \\[bar]*') // 'foo '

// GOOD
globParent('./foo \\[bar]') // 'foo [bar]'
globParent('./foo \\[bar]*') // '.'

ISC



readable-stream

Node.js core streams for userland Build Status

NPM NPM

Sauce Test Status

npm install --save readable-stream

This package is a mirror of the streams implementations in Node.js.

Full documentation may be found on the Node.js website.

If you want to guarantee a stable streams base, regardless of what version of Node you, or the users of your libraries are using, use readable-stream only and avoid the “stream” module in Node-core, for background see this blogpost.

As of version 2.0.0 readable-stream uses semantic versioning.

Version 3.x.x

v3.x.x of readable-stream is a cut from Node 10. This version supports Node 6, 8, and 10, as well as evergreen browsers, IE 11 and latest Safari. The breaking changes introduced by v3 combine the breaking changes of Node v9 and Node v10, as follows:

  1. Error codes: https://github.com/nodejs/node/pull/13310, https://github.com/nodejs/node/pull/13291, https://github.com/nodejs/node/pull/16589, https://github.com/nodejs/node/pull/15042, https://github.com/nodejs/node/pull/15665, https://github.com/nodejs/readable-stream/pull/344
  2. 'readable' has precedence over flowing https://github.com/nodejs/node/pull/18994
  3. make virtual methods errors consistent https://github.com/nodejs/node/pull/18813
  4. updated streams error handling https://github.com/nodejs/node/pull/18438
  5. writable.end should return this. https://github.com/nodejs/node/pull/18780
  6. readable continues to read when push('') https://github.com/nodejs/node/pull/18211
  7. add custom inspect to BufferList https://github.com/nodejs/node/pull/17907
  8. always defer 'readable' with nextTick https://github.com/nodejs/node/pull/17979

Version 2.x.x

v2.x.x of readable-stream is a cut of the stream module from Node 8 (there have been no semver-major changes from Node 4 to 8). This version supports all Node.js versions from 0.8, as well as evergreen browsers and IE 10 & 11.

Big Thanks

Cross-browser Testing Platform and Open Source <3 Provided by Sauce Labs



Usage

You can swap your require('stream') with require('readable-stream') without any changes, if you are just using one of the main classes and functions.

const {
  Readable,
  Writable,
  Transform,
  Duplex,
  pipeline,
  finished
} = require('readable-stream')

Note that require('stream') will return Stream, while require('readable-stream') will return Readable. We discourage using whatever is exported directly, but rather use one of the properties as shown in the example above.



Streams Working Group

readable-stream is maintained by the Streams Working Group, which oversees the development and maintenance of the Streams API within Node.js. The responsibilities of the Streams Working Group include:

Team Members


yallist

Yet Another Linked List

There are many doubly-linked list implementations like it, but this one is mine.

For when an array would be too big, and a Map can’t be iterated in reverse order.

Build Status Coverage Status

basic usage

var yallist = require('yallist')
var myList = yallist.create([1, 2, 3])
myList.push('foo')
myList.unshift('bar')
// of course pop() and shift() are there, too
console.log(myList.toArray()) // ['bar', 1, 2, 3, 'foo']
myList.forEach(function (k) {
  // walk the list head to tail
})
myList.forEachReverse(function (k, index, list) {
  // walk the list tail to head
})
var myDoubledList = myList.map(function (k) {
  return k + k
})
// now myDoubledList contains ['barbar', 2, 4, 6, 'foofoo']
// mapReverse is also a thing
var myDoubledListReverse = myList.mapReverse(function (k) {
  return k + k
}) // ['foofoo', 6, 4, 2, 'barbar']

var reduced = myList.reduce(function (set, entry) {
  set += entry
  return set
}, 'start')
console.log(reduced) // 'startfoo123bar'

api

The whole API is considered “public”.

Functions with the same name as an Array method work more or less the same way.

There’s reverse versions of most things because that’s the point.

Yallist

Default export, the class that holds and manages a list.

Call it with either a forEach-able (like an array) or a set of arguments, to initialize the list.

The Array-ish methods all act like you’d expect. No magic length, though, so if you change that it won’t automatically prune or add empty spots.

Yallist.create(..)

Alias for Yallist function. Some people like factories.

yallist.head

The first node in the list

yallist.tail

The last node in the list

yallist.length

The number of nodes in the list. (Change this at your peril. It is not magic like Array length.)

yallist.toArray()

Convert the list to an array.

yallist.forEach(fn, [thisp])

Call a function on each item in the list.

yallist.forEachReverse(fn, [thisp])

Call a function on each item in the list, in reverse order.

yallist.get(n)

Get the data at position n in the list. If you use this a lot, probably better off just using an Array.

yallist.getReverse(n)

Get the data at position n, counting from the tail.

yallist.map(fn, thisp)

Create a new Yallist with the result of calling the function on each item.

yallist.mapReverse(fn, thisp)

Same as map, but in reverse.

yallist.pop()

Get the data from the list tail, and remove the tail from the list.

yallist.push(item, …)

Insert one or more items to the tail of the list.

yallist.reduce(fn, initialValue)

Like Array.reduce.

yallist.reduceReverse

Like Array.reduce, but in reverse.

yallist.reverse

Reverse the list in place.

yallist.shift()

Get the data from the list head, and remove the head from the list.

yallist.slice(from, to)

Just like Array.slice, but returns a new Yallist.

yallist.sliceReverse(from, to)

Just like yallist.slice, but the result is returned in reverse.

yallist.toArray()

Create an array representation of the list.

yallist.toArrayReverse()

Create a reversed array representation of the list.

yallist.unshift(item, …)

Insert one or more items to the head of the list.

yallist.unshiftNode(node)

Move a Node object to the front of the list. (That is, pull it out of wherever it lives, and make it the new head.)

If the node belongs to a different list, then that list will remove it first.

yallist.pushNode(node)

Move a Node object to the end of the list. (That is, pull it out of wherever it lives, and make it the new tail.)

If the node belongs to a list already, then that list will remove it first.

yallist.removeNode(node)

Remove a node from the list, preserving referential integrity of head and tail and other nodes.

Will throw an error if you try to have a list remove a node that doesn’t belong to it.
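The pointer surgery involved can be sketched on plain objects. This is an illustrative sketch of the technique, not yallist's implementation:

```javascript
// Sketch of removeNode's pointer surgery on plain objects.
function removeNodeSketch(list, node) {
  if (node.list !== list) {
    throw new Error('removing node which does not belong to this list');
  }
  const { next, prev } = node;
  if (next) next.prev = prev;
  if (prev) prev.next = next;
  if (node === list.head) list.head = next;
  if (node === list.tail) list.tail = prev;
  node.list = null;
  node.next = null;
  node.prev = null;
  list.length--;
}

// Build a tiny 3-node list by hand: a <-> b <-> c
const a = {value: 'a'}, b = {value: 'b'}, c = {value: 'c'};
const list = {head: a, tail: c, length: 3};
a.prev = null; a.next = b;
b.prev = a;    b.next = c;
c.prev = b;    c.next = null;
a.list = b.list = c.list = list;

removeNodeSketch(list, b);
console.log(list.head.next === c && c.prev === a); // true
console.log(list.length);                          // 2
```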

Yallist.Node

The class that holds the data and is actually the list.

Call with var n = new Node(value, previousNode, nextNode)

Note that if you do direct operations on Nodes themselves, it’s very easy to get into weird states where the list is broken. Be careful :)

node.next

The next node in the list.

node.prev

The previous node in the list.

node.value

The data the node contains.

node.list

The list to which this node belongs. (Null if it does not belong to any list.)



cross-spawn

NPM version Downloads Build Status Build status Coverage Status Dependency status Dev Dependency status

A cross platform solution to node’s spawn and spawnSync.

Installation

Node.js version 8 and up: npm install cross-spawn

Node.js version 7 and under: npm install cross-spawn@6

Why

Node has issues when using spawn on Windows:

All these issues are handled correctly by cross-spawn. There are some known modules, such as win-spawn, that try to solve this but they are either broken or provide faulty escaping of shell arguments.

Usage

Exactly the same way as node’s spawn or spawnSync, so it’s a drop in replacement.

const spawn = require('cross-spawn');

// Spawn NPM asynchronously
const child = spawn('npm', ['list', '-g', '-depth', '0'], { stdio: 'inherit' });

// Spawn NPM synchronously
const result = spawn.sync('npm', ['list', '-g', '-depth', '0'], { stdio: 'inherit' });

Caveats

Using options.shell as an alternative to cross-spawn

Starting from node v4.8, spawn has a shell option that allows you to run commands from within a shell. This new option solves the PATHEXT issue but:

If you are using the shell option to spawn a command in a cross platform way, consider using cross-spawn instead. You have been warned.

options.shell support

While cross-spawn adds support for options.shell in node <v4.8, all of its enhancements are disabled.

This mimics the Node.js behavior. More specifically, the command and its arguments will not be automatically escaped, nor will shebang support be offered. This is by design: if you are using options.shell you are probably targeting a specific platform anyway, and you don’t want things to get in your way.

Shebangs support

While cross-spawn handles shebangs on Windows, its support is limited. More specifically, it just supports #!/usr/bin/env <program> where <program> must not contain any arguments.
If you would like to have the shebang support improved, feel free to contribute via a pull-request.

Remember to always test your code on Windows!

Tests

npm test
npm test -- --watch during development



is-absolute NPM version NPM monthly downloads NPM total downloads Linux Build Status

Returns true if a file path is absolute. Does not rely on the path module and can be used as a polyfill for node.js native path.isAbsolute.

Install

Install with npm:

$ npm install --save is-absolute

Originally based on the isAbsolute utility method in express.

Usage

var isAbsolute = require('is-absolute');

isAbsolute('a/b/c.js');
//=> false
isAbsolute('/a/b/c.js');
//=> true

Explicitly test posix paths

isAbsolute.posix('/foo/bar');
isAbsolute.posix('/user/docs/Letter.txt');
//=> true

isAbsolute.posix('foo/bar');
//=> false

Explicitly test windows paths

var isAbsolute = require('is-absolute');

isAbsolute.win32('c:\\');
isAbsolute.win32('//C://user\\docs\\Letter.txt');
isAbsolute.win32('\\\\unc\\share');
isAbsolute.win32('\\\\unc\\share\\foo');
isAbsolute.win32('\\\\unc\\share\\foo\\');
isAbsolute.win32('\\\\unc\\share\\foo\\bar');
isAbsolute.win32('\\\\unc\\share\\foo\\bar\\');
isAbsolute.win32('\\\\unc\\share\\foo\\bar\\baz');
//=> true

isAbsolute.win32('a:foo/a/b/c/d');
isAbsolute.win32(':\\');
isAbsolute.win32('foo\\bar\\baz');
isAbsolute.win32('foo\\bar\\baz\\');
//=> false

About

Contributing

Pull requests and stars are always welcome. For bugs and feature requests, please create an issue.

Commits Contributor
35 jonschlinkert
1 es128
1 shinnn
1 Sobak

Building docs

(This project’s readme.md is generated by verb, please don’t edit the readme directly. Any changes to the readme must be made in the .verb.md readme template.)

To generate the readme, run the following command:

$ npm install -g verbose/verb#dev verb-generate-readme && verb

Running tests

Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command:

$ npm install && npm test

Author

Jon Schlinkert


This file was generated by verb-generate-readme, v0.6.0, on July 13, 2017.



https-proxy-agent

An HTTP(s) proxy http.Agent implementation for HTTPS Build Status

This module provides an http.Agent implementation that connects to a specified HTTP or HTTPS proxy server, and can be used with the built-in https module.

Specifically, this Agent implementation connects to an intermediary “proxy” server and issues the CONNECT HTTP method, which tells the proxy to open a direct TCP connection to the destination server.

Since this agent implements the CONNECT HTTP method, it also works with other protocols that use this method when connecting over proxies (i.e. WebSockets). See the “Examples” section below for more.

Installation

Install with npm:

$ npm install https-proxy-agent

Examples

https module example

var url = require('url');
var https = require('https');
var HttpsProxyAgent = require('https-proxy-agent');

// HTTP/HTTPS proxy to connect to
var proxy = process.env.http_proxy || 'http://168.63.76.32:3128';
console.log('using proxy server %j', proxy);

// HTTPS endpoint for the proxy to connect to
var endpoint = process.argv[2] || 'https://graph.facebook.com/tootallnate';
console.log('attempting to GET %j', endpoint);
var options = url.parse(endpoint);

// create an instance of the `HttpsProxyAgent` class with the proxy server information
var agent = new HttpsProxyAgent(proxy);
options.agent = agent;

https.get(options, function (res) {
  console.log('"response" event!', res.headers);
  res.pipe(process.stdout);
});

ws WebSocket connection example

var url = require('url');
var WebSocket = require('ws');
var HttpsProxyAgent = require('https-proxy-agent');

// HTTP/HTTPS proxy to connect to
var proxy = process.env.http_proxy || 'http://168.63.76.32:3128';
console.log('using proxy server %j', proxy);

// WebSocket endpoint for the proxy to connect to
var endpoint = process.argv[2] || 'ws://echo.websocket.org';
var parsed = url.parse(endpoint);
console.log('attempting to connect to WebSocket %j', endpoint);

// create an instance of the `HttpsProxyAgent` class with the proxy server information
var options = url.parse(proxy);

var agent = new HttpsProxyAgent(options);

// finally, initiate the WebSocket connection
var socket = new WebSocket(endpoint, { agent: agent });

socket.on('open', function () {
  console.log('"open" event!');
  socket.send('hello world');
});

socket.on('message', function (data, flags) {
  console.log('"message" event! %j %j', data, flags);
  socket.close();
});

API

new HttpsProxyAgent(Object options)

The HttpsProxyAgent class implements an http.Agent subclass that connects to the specified “HTTP(s) proxy server” in order to proxy HTTPS and/or WebSocket requests. This is achieved by using the HTTP CONNECT method.

The options argument may either be a string URI of the proxy server to use, or an “options” object with more specific properties:



iMurmurHash.js

An incremental implementation of the MurmurHash3 (32-bit) hashing algorithm for JavaScript based on Gary Court’s implementation with kazuyukitanimura’s modifications.

This version works significantly faster than the non-incremental version if you need to hash many small strings into a single hash, since string concatenation (to build the single string to pass the non-incremental version) is fairly costly. In one case tested, using the incremental version was about 50% faster than concatenating 5-10 strings and then hashing.

Installation

To use iMurmurHash in the browser, download the latest version and include it as a script on your site.

<script type="text/javascript" src="/scripts/imurmurhash.min.js"></script>
<script>
// Your code here, access iMurmurHash using the global object MurmurHash3
</script>

To use iMurmurHash in Node.js, install the module using NPM:

npm install imurmurhash

Then simply include it in your scripts:

MurmurHash3 = require('imurmurhash');

Quick Example

// Create the initial hash
var hashState = MurmurHash3('string');

// Incrementally add text
hashState.hash('more strings');
hashState.hash('even more strings');

// All calls can be chained if desired
hashState.hash('and').hash('some').hash('more');

// Get a result
hashState.result();
// returns 0xe4ccfe6b

Functions

MurmurHash3 (string, [seed])

Get a hash state object, optionally initialized with the given string and seed. Seed must be a positive integer if provided. Calling this function without the new keyword will return a cached state object that has been reset. This is safe to use as long as the object is only used from a single thread and no other hashes are created while operating on this one. If this constraint cannot be met, you can use new to create a new state object. For example:

// Use the cached object, calling the function again will return the same
// object (but reset, so the current state would be lost)
hashState = MurmurHash3();
...

// Create a new object that can be safely used however you wish. Calling the
// function again will simply return a new state object, and no state loss
// will occur, at the cost of creating more objects.
hashState = new MurmurHash3();

Both methods can be mixed however you like if you have different use cases.
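The cached-versus-new behavior described above can be sketched with a plain constructor pattern. The `HashState` name and its fields are illustrative only, not the library's internals:

```javascript
// Sketch of the cached-state pattern described above.
function HashState(seed) {
  if (!(this instanceof HashState)) {
    // Called without `new`: reset and return the shared singleton.
    return HashState.cache.reset(seed);
  }
  this.reset(seed);
}

HashState.prototype.reset = function (seed) {
  this.seed = seed || 0;
  this.parts = []; // stand-in for real hash state
  return this;
};

HashState.prototype.hash = function (str) {
  this.parts.push(str);
  return this; // chainable, like MurmurHash3.prototype.hash
};

HashState.cache = new HashState();

var a = HashState().hash('abc'); // shared object
var b = HashState();             // same object, but reset: a's state is lost
console.log(a === b);                         // true
console.log(HashState() === new HashState()); // false
```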


MurmurHash3.prototype.hash (string)

Incrementally add string to the hash. This can be called as many times as you want for the hash state object, including after a call to result(). Returns this so calls can be chained.


MurmurHash3.prototype.result ()

Get the result of the hash as a 32-bit positive integer. It is safe to call result() and then continue adding strings via hash(), as the examples below show.
// Do the whole string at once
MurmurHash3('this is a test string').result();
// 0x70529328

// Do part of the string, get a result, then the other part
var m = MurmurHash3('this is a');
m.result();
// 0xbfc4f834
m.hash(' test string').result();
// 0x70529328 (same as above)

MurmurHash3.prototype.reset ([seed])

Reset the state object for reuse, optionally using the given seed (defaults to 0 like the constructor). Returns this so calls can be chained.
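For reference, the 32-bit MurmurHash3 that this library computes incrementally can be sketched non-incrementally. This is a textbook x86 32-bit implementation for illustration, not the library's source:

```javascript
// Textbook (non-incremental) MurmurHash3 x86 32-bit, for illustration only;
// the library produces the same kind of 32-bit result incrementally.
function murmur3(str, seed) {
  var c1 = 0xcc9e2d51, c2 = 0x1b873593;
  var h = seed >>> 0;
  var i = 0, k;
  var nblocks = str.length & ~3; // number of bytes in full 4-byte blocks
  for (; i < nblocks; i += 4) {
    k = (str.charCodeAt(i) & 0xff) |
        ((str.charCodeAt(i + 1) & 0xff) << 8) |
        ((str.charCodeAt(i + 2) & 0xff) << 16) |
        ((str.charCodeAt(i + 3) & 0xff) << 24);
    k = Math.imul(k, c1); k = (k << 15) | (k >>> 17); k = Math.imul(k, c2);
    h ^= k;
    h = (h << 13) | (h >>> 19);
    h = (Math.imul(h, 5) + 0xe6546b64) | 0;
  }
  k = 0;
  switch (str.length & 3) { // remaining tail bytes
    case 3: k ^= (str.charCodeAt(i + 2) & 0xff) << 16; /* falls through */
    case 2: k ^= (str.charCodeAt(i + 1) & 0xff) << 8;  /* falls through */
    case 1: k ^= str.charCodeAt(i) & 0xff;
      k = Math.imul(k, c1); k = (k << 15) | (k >>> 17); k = Math.imul(k, c2);
      h ^= k;
  }
  // finalization ("fmix")
  h ^= str.length;
  h ^= h >>> 16; h = Math.imul(h, 0x85ebca6b);
  h ^= h >>> 13; h = Math.imul(h, 0xc2b2ae35);
  h ^= h >>> 16;
  return h >>> 0;
}

// Should agree with the README example above if this sketch is faithful:
console.log(murmur3('this is a test string', 0).toString(16));
```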




assert-plus

This library is a super small wrapper over node’s assert module that adds two things: (1) the ability to disable assertions with the environment variable NODE_NDEBUG, and (2) some API wrappers for argument testing, like assert.string(myArg, 'myArg'). As a simple example, most of my code looks like this:

    var assert = require('assert-plus');

    function fooAccount(options, callback) {
        assert.object(options, 'options');
        assert.number(options.id, 'options.id');
        assert.bool(options.isManager, 'options.isManager');
        assert.string(options.name, 'options.name');
        assert.arrayOfString(options.email, 'options.email');
        assert.func(callback, 'callback');

        // Do stuff
        callback(null, {});
    }


API

All methods that aren’t part of node’s core assert API take the argument to check, followed by a string ‘name’ (not a message); an AssertionError is thrown if the assertion fails, with a message like:

AssertionError: foo (string) is required
at test (/home/mark/work/foo/foo.js:3:9)
at Object.<anonymous> (/home/mark/work/foo/foo.js:15:1)
at Module._compile (module.js:446:26)
at Object..js (module.js:464:10)
at Module.load (module.js:353:31)
at Function._load (module.js:311:12)
at Array.0 (module.js:484:10)
at EventEmitter._tickCallback (node.js:190:38)

from:

    function test(foo) {
        assert.string(foo, 'foo');
    }

There you go. You can check that arrays are of a homogeneous type with arrayOf$Type:

    function test(foo) {
        assert.arrayOfString(foo, 'foo');
    }

You can assert only if an argument is not undefined (i.e., an optional arg):

    assert.optionalString(foo, 'foo');

Lastly, you can opt out of assertion checking altogether by setting the environment variable NODE_NDEBUG=1. This is pseudo-useful if you have lots of assertions, and don’t want to pay typeof () taxes to v8 in production. Be advised: The standard functions re-exported from assert are also disabled in assert-plus if NDEBUG is specified. Using them directly from the assert module avoids this behavior.

The complete list of APIs is:



Installation

npm install assert-plus

Bugs

See https://github.com/mcavage/node-assert-plus/issues.



negotiator

NPM Version NPM Downloads Node.js Version Build Status Test Coverage

An HTTP content negotiator for Node.js

Installation

$ npm install negotiator

API

var Negotiator = require('negotiator')

Accept Negotiation

availableMediaTypes = ['text/html', 'text/plain', 'application/json']

// The negotiator constructor receives a request object
negotiator = new Negotiator(request)

// Let's say Accept header is 'text/html, application/*;q=0.2, image/jpeg;q=0.8'

negotiator.mediaTypes()
// -> ['text/html', 'image/jpeg', 'application/*']

negotiator.mediaTypes(availableMediaTypes)
// -> ['text/html', 'application/json']

negotiator.mediaType(availableMediaTypes)
// -> 'text/html'

You can check a working example at examples/accept.js.

Methods

mediaType()

Returns the most preferred media type from the client.

mediaType(availableMediaTypes)

Returns the most preferred media type from a list of available media types.

mediaTypes()

Returns an array of preferred media types ordered by the client preference.

mediaTypes(availableMediaTypes)

Returns an array of preferred media types ordered by priority from a list of available media types.

Accept-Language Negotiation

negotiator = new Negotiator(request)

availableLanguages = ['en', 'es', 'fr']

// Let's say Accept-Language header is 'en;q=0.8, es, pt'

negotiator.languages()
// -> ['es', 'pt', 'en']

negotiator.languages(availableLanguages)
// -> ['es', 'en']

language = negotiator.language(availableLanguages)
// -> 'es'

You can check a working example at examples/language.js.
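The ordering above follows HTTP q-values. A minimal sketch of that ranking logic (illustrative only, not negotiator's implementation; the `rankLanguages` helper is hypothetical):

```javascript
// Minimal q-value ranking sketch: parse each entry's quality,
// drop q=0 entries, and sort by quality with header order as tie-breaker.
function rankLanguages(header) {
  return header.split(',')
    .map(function (part, index) {
      var pieces = part.trim().split(';');
      var q = 1; // default quality per RFC 7231
      for (var i = 1; i < pieces.length; i++) {
        var m = /^q=([0-9.]+)$/.exec(pieces[i].trim());
        if (m) q = parseFloat(m[1]);
      }
      return { tag: pieces[0], q: q, index: index };
    })
    .filter(function (e) { return e.q > 0; })
    .sort(function (a, b) { return b.q - a.q || a.index - b.index; })
    .map(function (e) { return e.tag; });
}

console.log(rankLanguages('en;q=0.8, es, pt'));
// -> [ 'es', 'pt', 'en' ]
```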

Methods

language()

Returns the most preferred language from the client.

language(availableLanguages)

Returns the most preferred language from a list of available languages.

languages()

Returns an array of preferred languages ordered by the client preference.

languages(availableLanguages)

Returns an array of preferred languages ordered by priority from a list of available languages.

Accept-Charset Negotiation

availableCharsets = ['utf-8', 'iso-8859-1', 'iso-8859-5']

negotiator = new Negotiator(request)

// Let's say Accept-Charset header is 'utf-8, iso-8859-1;q=0.8, utf-7;q=0.2'

negotiator.charsets()
// -> ['utf-8', 'iso-8859-1', 'utf-7']

negotiator.charsets(availableCharsets)
// -> ['utf-8', 'iso-8859-1']

negotiator.charset(availableCharsets)
// -> 'utf-8'

You can check a working example at examples/charset.js.

Methods

charset()

Returns the most preferred charset from the client.

charset(availableCharsets)

Returns the most preferred charset from a list of available charsets.

charsets()

Returns an array of preferred charsets ordered by the client preference.

charsets(availableCharsets)

Returns an array of preferred charsets ordered by priority from a list of available charsets.

Accept-Encoding Negotiation

availableEncodings = ['identity', 'gzip']

negotiator = new Negotiator(request)

// Let's say Accept-Encoding header is 'gzip, compress;q=0.2, identity;q=0.5'

negotiator.encodings()
// -> ['gzip', 'identity', 'compress']

negotiator.encodings(availableEncodings)
// -> ['gzip', 'identity']

negotiator.encoding(availableEncodings)
// -> 'gzip'

You can check a working example at examples/encoding.js.

Methods

encoding()

Returns the most preferred encoding from the client.

encoding(availableEncodings)

Returns the most preferred encoding from a list of available encodings.

encodings()

Returns an array of preferred encodings ordered by the client preference.

encodings(availableEncodings)

Returns an array of preferred encodings ordered by priority from a list of available encodings.

See Also

The accepts module builds on this module and provides an alternative interface, mime type validation, and more.



define-property NPM version NPM monthly downloads NPM total downloads Linux Build Status

Define a non-enumerable property on an object. Uses Reflect.defineProperty when available, otherwise Object.defineProperty.

Please consider following this project’s author, Jon Schlinkert, and starring the project to show your :heart: and support.

Install

Install with npm:

$ npm install --save define-property

Release history

See the CHANGELOG for updates.

Usage

Params

var define = require('define-property');
var obj = {};
define(obj, 'foo', function(val) {
  return val.toUpperCase();
});

// by default, defined properties are non-enumerable
console.log(obj);
//=> {}

console.log(obj.foo('bar'));
//=> 'BAR'

defining setters/getters

Pass the same properties you would if using Object.defineProperty or Reflect.defineProperty.

define(obj, 'foo', {
  set: function() {},
  get: function() {}
});
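The default behavior corresponds to a plain non-enumerable data property; here is a stdlib sketch using Object.defineProperty directly (the exact descriptor flags the library sets may differ):

```javascript
// Standard-API sketch of defining a non-enumerable property.
var obj = {};
Object.defineProperty(obj, 'foo', {
  configurable: true,
  enumerable: false, // this is why the property does not show up below
  writable: true,
  value: function (val) { return val.toUpperCase(); }
});

console.log(Object.keys(obj)); // []
console.log(obj.foo('bar'));   // BAR
```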

About

Contributing

Pull requests and stars are always welcome. For bugs and feature requests, please create an issue.

Running Tests

Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command:

Building docs

(This project’s readme.md is generated by verb, please don’t edit the readme directly. Any changes to the readme must be made in the .verb.md readme template.)

To generate the readme, run the following command:

You might also be interested in these projects:

Commits Contributor
28 jonschlinkert
1 doowb

Author

Jon Schlinkert


This file was generated by verb-generate-readme, v0.6.0, on January 25, 2018.



Google Cloud Common Projectify: Node.js Client

release level npm version codecov

A simple utility for replacing the projectId token in objects.

Read more about the client libraries for Cloud APIs, including the older Google APIs Client Libraries, in Client Libraries Explained.

Table of contents:

Quickstart

Installing the client library

npm install @google-cloud/projectify

Using the client library

const {replaceProjectIdToken} = require('@google-cloud/projectify');
const options = {
  projectId: '{{projectId}}',
};
replaceProjectIdToken(options, 'fake-project-id');
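The token replacement can be sketched in a few lines. The `replaceProjectId` helper below is hypothetical, for illustration only; it is not the library's API:

```javascript
// Illustrative sketch of {{projectId}} token replacement in nested objects.
function replaceProjectId(value, projectId) {
  if (typeof value === 'string') {
    return value.replace(/\{\{projectId\}\}/g, projectId);
  }
  if (value && typeof value === 'object') {
    for (var key in value) {
      value[key] = replaceProjectId(value[key], projectId);
    }
  }
  return value;
}

var options = {
  projectId: '{{projectId}}',
  topic: 'projects/{{projectId}}/topics/my-topic'
};
console.log(replaceProjectId(options, 'fake-project-id'));
// -> both tokens replaced with 'fake-project-id'
```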

Samples

Samples are in the samples/ directory. The samples’ README.md has instructions for running the samples.

Sample Source Code Try it
Quickstart source code Open in Cloud Shell

Our client libraries follow the Node.js release schedule. Libraries are compatible with all current active and maintenance versions of Node.js.

Client libraries targeting some end-of-life versions of Node.js are available, and can be installed via npm dist-tags. The dist-tags follow the naming convention legacy-(version).

Legacy Node.js versions are supported as a best effort:

Legacy tags available

Versioning

This library follows Semantic Versioning.

This library is considered to be General Availability (GA). This means it is stable; the code surface will not change in backwards-incompatible ways unless absolutely necessary (e.g. because of critical security issues) or with an extensive deprecation period. Issues and requests against GA libraries are addressed with the highest priority.

More Information: Google Cloud Platform Launch Stages

Contributing

Contributions welcome! See the Contributing Guide.

Please note that this README.md, the samples/README.md, and a variety of configuration files in this repository (including .nycrc and tsconfig.json) are generated from a central template. To edit one of these files, make an edit to its template in this directory.

Apache Version 2.0

See LICENSE



gaxios

npm version codecov Code Style: Google

An HTTP request client that provides an axios-like interface on top of node-fetch.

Install

$ npm install gaxios

Example

const {request} = require('gaxios');
const res = await request({
  url: 'https://www.googleapis.com/discovery/v1/apis/'
});

Setting Defaults

Gaxios supports setting default properties both on the default instance, and on additional instances. This is often useful when making many requests to the same domain with the same base settings. For example:

const gaxios = require('gaxios');
gaxios.instance.defaults = {
  baseURL: 'https://example.com',
  headers: {
    Authorization: 'SOME_TOKEN'
  }
}
gaxios.request({url: '/data'}).then(...);

Request Options

{
  // The url to which the request should be sent.  Required.
  url: string,

  // The HTTP method to use for the request.  Defaults to `GET`.
  method: 'GET',

  // The base Url to use for the request. Prepended to the `url` property above.
  baseURL: 'https://example.com';

  // The HTTP headers to be sent with the request.
  headers: { 'some': 'header' },

  // The data to send in the body of the request. Data objects will be
  // serialized as JSON.
  //
  // Note: if you would like to provide a Content-Type header other than
  // application/json you must provide a string or readable stream, rather
  // than an object:
  // data: JSON.stringify({some: 'data'})
  // data: fs.readFile('./some-data.jpeg')
  data: {
    some: 'data'
  },

  // The max size of the http response content in bytes allowed.
  // Defaults to `0`, which is the same as unset.
  maxContentLength: 2000,

  // The max number of HTTP redirects to follow.
  // Defaults to 100.
  maxRedirects: 100,

  // The querystring parameters that will be encoded using `qs` and
  // appended to the url
  params: {
    querystring: 'parameters'
  },

  // By default, we use the `querystring` package in node core to serialize
  // querystring parameters.  You can override that and provide your
  // own implementation.
  paramsSerializer: (params) => {
    return qs.stringify(params);
  },

  // The timeout for the HTTP request. Defaults to 0.
  timeout: 1000,

  // Optional method to override making the actual HTTP request. Useful
  // for writing tests and instrumentation
  adapter?: async (options, defaultAdapter) => {
    const res = await defaultAdapter(options);
    res.data = {
      ...res.data,
      extraProperty: 'your extra property',
    };
    return res;
  };

  // The expected return type of the request.  Options are:
  // json | stream | blob | arraybuffer | text
  // Defaults to `json`.
  responseType: 'json',

  // The node.js http agent to use for the request.
  agent: someHttpsAgent,

  // Custom function to determine if the response is valid based on the
  // status code.  Defaults to (>= 200 && < 300)
  validateStatus: (status: number) => true,

  // Implementation of `fetch` to use when making the API call. By default,
  // will use the browser context if available, and fall back to `node-fetch`
  // in node.js otherwise.
  fetchImplementation?: typeof fetch;

  // Configuration for retrying of requests.
  retryConfig: {
    // The number of times to retry the request.  Defaults to 3.
    retry?: number;

    // The number of retries already attempted.
    currentRetryAttempt?: number;

    // The HTTP Methods that will be automatically retried.
    // Defaults to ['GET','PUT','HEAD','OPTIONS','DELETE']
    httpMethodsToRetry?: string[];

    // The HTTP response status codes that will automatically be retried.
    // Defaults to: [[100, 199], [429, 429], [500, 599]]
    statusCodesToRetry?: number[][];

    // Function to invoke when a retry attempt is made.
    onRetryAttempt?: (err: GaxiosError) => Promise<void> | void;

    // Function to invoke which determines if you should retry
    shouldRetry?: (err: GaxiosError) => Promise<boolean> | boolean;

    // When there is no response, the number of retries to attempt. Defaults to 2.
    noResponseRetries?: number;

    // The amount of time to initially delay the retry, in ms.  Defaults to 100ms.
    retryDelay?: number;
  },

  // Enables default configuration for retries.
  retry: boolean,

  // Cancelling a request requires the `abort-controller` library.
  // See https://github.com/bitinn/node-fetch#request-cancellation-with-abortsignal
  signal?: AbortSignal
}

Apache-2.0



sprintf.js

sprintf.js is a complete open source JavaScript sprintf implementation for the browser and node.js.

Its prototype is simple:

string sprintf(string format , [mixed arg1 [, mixed arg2 [ ,...]]])

The placeholders in the format string are marked by % and are followed by one or more of these elements, in this order:

JavaScript vsprintf

vsprintf is the same as sprintf except that it accepts an array of arguments, rather than a variable number of arguments:

vsprintf("The first 4 letters of the english alphabet are: %s, %s, %s and %s", ["a", "b", "c", "d"])

Argument swapping

You can also swap the arguments. That is, the order of the placeholders doesn’t have to match the order of the arguments. You can do that by simply indicating in the format string which arguments the placeholders refer to:

sprintf("%2$s %3$s a %1$s", "cracker", "Polly", "wants")

And, of course, you can repeat the placeholders without having to increase the number of arguments.

Named arguments

Format strings may contain replacement fields rather than positional placeholders. Instead of referring to a certain argument, you can now refer to a certain key within an object. Replacement fields are surrounded by rounded parentheses - ( and ) - and begin with a keyword that refers to a key:

var user = {
    name: "Dolly"
}
sprintf("Hello %(name)s", user) // Hello Dolly

Keywords in replacement fields can be optionally followed by any number of keywords or indexes:

var users = [
    {name: "Dolly"},
    {name: "Molly"},
    {name: "Polly"}
]
sprintf("Hello %(users[0].name)s, %(users[1].name)s and %(users[2].name)s", {users: users}) // Hello Dolly, Molly and Polly

Note: mixing positional and named placeholders is not (yet) supported
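A minimal sketch of how %(key)s replacement fields can be resolved, including `users[0]`-style indexes; the `fmtNamed` helper is hypothetical, not sprintf-js itself:

```javascript
// Minimal resolution of %(key)s replacement fields, for illustration.
function fmtNamed(format, obj) {
  return format.replace(/%\(([^)]+)\)s/g, function (_, path) {
    return path.split('.').reduce(function (o, key) {
      var m = /^(\w+)\[(\d+)\]$/.exec(key); // handle `users[0]`-style indexes
      return m ? o[m[1]][Number(m[2])] : o[key];
    }, obj);
  });
}

console.log(fmtNamed('Hello %(name)s', { name: 'Dolly' }));
// Hello Dolly
console.log(fmtNamed('Hello %(users[2].name)s', {
  users: [{ name: 'Dolly' }, { name: 'Molly' }, { name: 'Polly' }]
}));
// Hello Polly
```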

Computed values

You can pass in a function as a dynamic value and it will be invoked (with no arguments) in order to compute the value on-the-fly.

sprintf("Current timestamp: %d", Date.now) // Current timestamp: 1398005382890
sprintf("Current date and time: %s", function() { return new Date().toString() })


AngularJS

You can now use sprintf and vsprintf (also aliased as fmt and vfmt respectively) in your AngularJS projects. See demo/.



Installation

Via Bower

bower install sprintf

Or as a node.js module

npm install sprintf-js

Usage

var sprintf = require("sprintf-js").sprintf,
    vsprintf = require("sprintf-js").vsprintf

sprintf("%2$s %3$s a %1$s", "cracker", "Polly", "wants")
vsprintf("The first 4 letters of the english alphabet are: %s, %s, %s and %s", ["a", "b", "c", "d"])


Estraverse Build Status

Estraverse (estraverse) is a set of ECMAScript traversal functions extracted from the esmangle project.

Documentation

You can find usage docs at wiki page.

Example Usage

The following code will output all variables declared at the root of a file.

estraverse.traverse(ast, {
    enter: function (node, parent) {
        if (node.type == 'FunctionExpression' || node.type == 'FunctionDeclaration')
            return estraverse.VisitorOption.Skip;
    },
    leave: function (node, parent) {
        if (node.type == 'VariableDeclarator')
          console.log(node.id.name);
    }
});

We can use this.skip, this.remove and this.break functions instead of using Skip, Remove and Break.

estraverse.traverse(ast, {
    enter: function (node) {
        this.break();
    }
});

And estraverse provides estraverse.replace function. When returning node from enter/leave, current node is replaced with it.

result = estraverse.replace(tree, {
    enter: function (node) {
        // Replace it with replaced.
        if (node.type === 'Literal')
            return replaced;
    }
});

By passing visitor.keys mapping, we can extend estraverse traversing functionality.

// This tree contains a user-defined `TestExpression` node.
var tree = {
    type: 'TestExpression',

    // This 'argument' is the property containing the other **node**.
    argument: {
        type: 'Literal',
        value: 20
    },

    // This 'extended' is the property not containing the other **node**.
    extended: true
};
estraverse.traverse(tree, {
    enter: function (node) { },

    // Extending the existing traversing rules.
    keys: {
        // TargetNodeName: [ 'keys', 'containing', 'the', 'other', '**node**' ]
        TestExpression: ['argument']
    }
});

By passing visitor.fallback option, we can control the behavior when encountering unknown nodes.

// This tree contains a user-defined `TestExpression` node.
var tree = {
    type: 'TestExpression',

    // This 'argument' is the property containing the other **node**.
    argument: {
        type: 'Literal',
        value: 20
    },

    // This 'extended' is the property not containing the other **node**.
    extended: true
};
estraverse.traverse(tree, {
    enter: function (node) { },

    // Iterating the child **nodes** of unknown nodes.
    fallback: 'iteration'
});

When visitor.fallback is a function, we can determine which keys to visit on each node.

// This tree contains a user-defined `TestExpression` node.
var tree = {
    type: 'TestExpression',

    // This 'argument' is the property containing the other **node**.
    argument: {
        type: 'Literal',
        value: 20
    },

    // This 'extended' is the property not containing the other **node**.
    extended: true
};
estraverse.traverse(tree, {
    enter: function (node) { },

    // Skip the `argument` property of each node
    fallback: function(node) {
        return Object.keys(node).filter(function(key) {
            return key !== 'argument';
        });
    }
});
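The keys/fallback mechanics above can be illustrated with a toy traversal (illustrative only; not estraverse's implementation):

```javascript
// Toy traversal showing how visitor keys and fallback drive the walk.
function traverse(node, visitor) {
  if (visitor.enter) visitor.enter(node);
  var keys =
    (visitor.keys && visitor.keys[node.type]) ||
    (typeof visitor.fallback === 'function'
      ? visitor.fallback(node)
      : Object.keys(node)); // crude 'iteration'-style fallback
  keys.forEach(function (key) {
    var child = node[key];
    if (child && typeof child === 'object' && typeof child.type === 'string') {
      traverse(child, visitor);
    }
  });
  if (visitor.leave) visitor.leave(node);
}

var tree = {
  type: 'TestExpression',
  argument: { type: 'Literal', value: 20 },
  extended: true
};

var visited = [];
traverse(tree, {
  enter: function (node) { visited.push(node.type); },
  keys: { TestExpression: ['argument'] }
});
console.log(visited); // [ 'TestExpression', 'Literal' ]
```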



on-finished

NPM Version NPM Downloads Node.js Version Build Status Test Coverage

Execute a callback when a HTTP request closes, finishes, or errors.

Install

$ npm install on-finished

API

var onFinished = require('on-finished')

onFinished(res, listener)

Attach a listener to listen for the response to finish. The listener will be invoked only once, when the response finishes. If the response finished due to an error, the first argument will contain the error. If the response has already finished, the listener will still be invoked.

Listening to the end of a response would be used to close things associated with the response, like open files.

Listener is invoked as listener(err, res).

onFinished(res, function (err, res) {
  // clean up open fds, etc.
  // err contains the error if the request errored
})

onFinished(req, listener)

Attach a listener to listen for the request to finish. The listener will be invoked only once, when the request finishes. If the request finished due to an error, the first argument will contain the error. If the request has already finished, the listener will still be invoked.

Listening to the end of a request would be used to know when to continue after reading the data.

Listener is invoked as listener(err, req).

var data = ''

req.setEncoding('utf8')
req.on('data', function (str) {
  data += str
})

onFinished(req, function (err, req) {
  // data is read unless there is err
})

onFinished.isFinished(res)

Determine if res is already finished. This would be useful to check and not even start certain operations if the response has already finished.

onFinished.isFinished(req)

Determine if req is already finished. This would be useful to check and not even start certain operations if the request has already finished.

Special Node.js requests

HTTP CONNECT method

The meaning of the CONNECT method from RFC 7231, section 4.3.6:

The CONNECT method requests that the recipient establish a tunnel to the destination origin server identified by the request-target and, if successful, thereafter restrict its behavior to blind forwarding of packets, in both directions, until the tunnel is closed. Tunnels are commonly used to create an end-to-end virtual connection, through one or more proxies, which can then be secured using TLS (Transport Layer Security, [RFC5246]).

In Node.js, these request objects come from the 'connect' event on the HTTP server.

When this module is used on a HTTP CONNECT request, the request is considered “finished” immediately, due to limitations in the Node.js interface. This means if the CONNECT request contains a request entity, the request will be considered “finished” even before it has been read.

There is no such thing as a response object to a CONNECT request in Node.js, so there is no support for one.

HTTP Upgrade request

The meaning of the Upgrade header from RFC 7230, section 6.1:

The “Upgrade” header field is intended to provide a simple mechanism for transitioning from HTTP/1.1 to some other protocol on the same connection.

In Node.js, these request objects come from the 'upgrade' event on the HTTP server.

When this module is used on a HTTP request with an Upgrade header, the request is considered “finished” immediately, due to limitations in the Node.js interface. This means if the Upgrade request contains a request entity, the request will be considered “finished” even before it has been read.

There is no such thing as a response object to an Upgrade request in Node.js, so there is no support for one.

Example

The following code ensures that file descriptors are always closed once the response finishes.

var destroy = require('destroy')
var fs = require('fs')
var http = require('http')
var onFinished = require('on-finished')

http.createServer(function onRequest(req, res) {
  var stream = fs.createReadStream('package.json')
  stream.pipe(res)
  onFinished(res, function (err) {
    destroy(stream)
  })
})


Punycode.js Build status Code coverage status Dependency status

Punycode.js is a robust Punycode converter that fully complies to RFC 3492 and RFC 5891.

This JavaScript library is the result of comparing, optimizing and documenting different open-source implementations of the Punycode algorithm:

This project was bundled with Node.js from v0.6.2+ until v7 (soft-deprecated).

The current version supports recent versions of Node.js only. It provides a CommonJS module and an ES6 module. For the old version that offers the same functionality with broader support, including Rhino, Ringo, Narwhal, and web browsers, see v1.4.1.

Installation

Via npm:

npm install punycode --save

In Node.js:

const punycode = require('punycode');

API

punycode.decode(string)

Converts a Punycode string of ASCII symbols to a string of Unicode symbols.

// decode domain name parts
punycode.decode('maana-pta'); // 'mañana'
punycode.decode('--dqo34k'); // '☃-⌘'

punycode.encode(string)

Converts a string of Unicode symbols to a Punycode string of ASCII symbols.

// encode domain name parts
punycode.encode('mañana'); // 'maana-pta'
punycode.encode('☃-⌘'); // '--dqo34k'

punycode.toUnicode(input)

Converts a Punycode string representing a domain name or an email address to Unicode. Only the Punycoded parts of the input will be converted, i.e. it doesn’t matter if you call it on a string that has already been converted to Unicode.

// decode domain names
punycode.toUnicode('xn--maana-pta.com');
// → 'mañana.com'
punycode.toUnicode('xn----dqo34k.com');
// → '☃-⌘.com'

// decode email addresses
punycode.toUnicode('джумла@xn--p-8sbkgc5ag7bhce.xn--ba-lmcq');
// → 'джумла@джpумлатест.bрфa'

punycode.toASCII(input)

Converts a lowercased Unicode string representing a domain name or an email address to Punycode. Only the non-ASCII parts of the input will be converted, i.e. it doesn’t matter if you call it with a domain that’s already in ASCII.

// encode domain names
punycode.toASCII('mañana.com');
// → 'xn--maana-pta.com'
punycode.toASCII('☃-⌘.com');
// → 'xn----dqo34k.com'

// encode email addresses
punycode.toASCII('джумла@джpумлатест.bрфa');
// → 'джумла@xn--p-8sbkgc5ag7bhce.xn--ba-lmcq'

punycode.ucs2

punycode.ucs2.decode(string)

Creates an array containing the numeric code point values of each Unicode symbol in the string. While JavaScript uses UCS-2 internally, this function will convert a pair of surrogate halves (each of which UCS-2 exposes as separate characters) into a single code point, matching UTF-16.

punycode.ucs2.decode('abc');
// → [0x61, 0x62, 0x63]
// surrogate pair for U+1D306 TETRAGRAM FOR CENTRE:
punycode.ucs2.decode('\uD834\uDF06');
// → [0x1D306]
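For comparison, modern JavaScript's built-in string iteration is also code-point aware, so the same surrogate-pair handling can be sketched without this library:

```javascript
// Walk the string by code points (not UCS-2 code units) using the
// built-in string iterator, mirroring punycode.ucs2.decode's behaviour.
const codePoints = Array.from('\uD834\uDF06abc', symbol => symbol.codePointAt(0));
console.log(codePoints); // [ 119558, 97, 98, 99 ], i.e. [0x1D306, 0x61, 0x62, 0x63]
```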

punycode.ucs2.encode(codePoints)

Creates a string based on an array of numeric code point values.

punycode.ucs2.encode([0x61, 0x62, 0x63]);
// → 'abc'
punycode.ucs2.encode([0x1D306]);
// → '\uD834\uDF06'

punycode.version

A string representing the current Punycode.js version number.

Author

twitter/mathias
Mathias Bynens


map-visit NPM version NPM monthly downloads NPM total downloads Linux Build Status

Map visit over an array of objects.

Install

Install with npm:

$ npm install --save map-visit

Usage

var mapVisit = require('map-visit');

What does this do?

Assign/Merge/Extend vs. Visit

Let’s say you want to add a set method to your application that will:

Example using extend

Here is one way to accomplish this using Lo-Dash’s extend (comparable to Object.assign):

var _ = require('lodash');

var obj = {
  data: {},
  set: function (key, value) {
    if (Array.isArray(key)) {
      _.extend.apply(_, [obj.data].concat(key));
    } else if (typeof key === 'object') {
      _.extend(obj.data, key);
    } else {
      obj.data[key] = value;
    }
  }
};

obj.set('a', 'a');
obj.set([{b: 'b'}, {c: 'c'}]);
obj.set({d: {e: 'f'}});

console.log(obj.data);
//=> {a: 'a', b: 'b', c: 'c', d: { e: 'f' }}

The above approach works fine for most use cases. However, if you also want to emit an event each time a property is added to the data object, or you want more control over what happens as the object is extended, a better approach would be to use visit.

Example using visit

In this approach:

As a result, the data event will be emitted every time a property is added to data (events are just an example, you can use this approach to perform any necessary logic every time the method is called).

var mapVisit = require('map-visit');
var visit = require('object-visit');

var obj = {
  data: {},
  set: function (key, value) {
    if (Array.isArray(key)) {
      mapVisit(obj, 'set', key);
    } else if (typeof key === 'object') {
      visit(obj, 'set', key);
    } else {
      // simulate an event-emitter
      console.log('emit', key, value);
      obj.data[key] = value;
    }
  }
};

obj.set('a', 'a');
obj.set([{b: 'b'}, {c: 'c'}]);
obj.set({d: {e: 'f'}});
obj.set({g: 'h', i: 'j', k: 'l'});

console.log(obj.data);
//=> {a: 'a', b: 'b', c: 'c', d: { e: 'f' }, g: 'h', i: 'j', k: 'l'}

// events would look something like:
// emit a a
// emit b b
// emit c c
// emit d { e: 'f' }
// emit g h
// emit i j
// emit k l

About

Contributing

Pull requests and stars are always welcome. For bugs and feature requests, please create an issue.

Commits Contributor
15 jonschlinkert
7 doowb

Building docs

(This project’s readme.md is generated by verb, please don’t edit the readme directly. Any changes to the readme must be made in the .verb.md readme template.)

To generate the readme, run the following command:

$ npm install -g verbose/verb#dev verb-generate-readme && verb

Running tests

Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command:

$ npm install && npm test

Author

Jon Schlinkert


This file was generated by verb-generate-readme, v0.5.0, on April 09, 2017.


@nodelib/fs.scandir

List files and directories inside the specified directory.

:bulb: Highlights

The package is aimed at obtaining information about entries in the directory.

Install

npm install @nodelib/fs.scandir

Usage

import * as fsScandir from '@nodelib/fs.scandir';

fsScandir.scandir('path', (error, stats) => { /* … */ });

API

.scandir(path, optionsOrSettings, callback)

Returns an array of plain objects (Entry) with information about the entries for the provided path, using the standard callback style.

fsScandir.scandir('path', (error, entries) => { /* … */ });
fsScandir.scandir('path', {}, (error, entries) => { /* … */ });
fsScandir.scandir('path', new fsScandir.Settings(), (error, entries) => { /* … */ });

.scandirSync(path, optionsOrSettings)

Returns an array of plain objects (Entry) with information about the entries for the provided path.

const entries = fsScandir.scandirSync('path');
const entries = fsScandir.scandirSync('path', {});
const entries = fsScandir.scandirSync('path', new fsScandir.Settings());

path

A path to a file. If a URL is provided, it must use the file: protocol.

optionsOrSettings

An Options object or an instance of Settings class.

:book: When you pass a plain object, an instance of the Settings class will be created automatically. If you plan to call the method frequently, use a pre-created instance of the Settings class.

Settings(options)

A class holding the full settings of the package.

const settings = new fsScandir.Settings({ followSymbolicLinks: false });

const entries = fsScandir.scandirSync('path', settings);

Entry

For example, the scandir call for tools directory with one directory inside:

{
    dirent: Dirent { name: 'typedoc', /* … */ },
    name: 'typedoc',
    path: 'tools/typedoc'
}

Options

stats

Adds an instance of fs.Stats class to the Entry.

:book: Always use fs.readdir without the withFileTypes option.

followSymbolicLinks

Follow symbolic links or not. Calls fs.stat on a symbolic link if true.

throwErrorOnBrokenSymbolicLink

Throws an error when a symbolic link is broken if true, or safely falls back to the lstat call if false.

pathSegmentSeparator

By default, this package uses the correct path separator for your OS (\ on Windows, / on Unix-like systems). But you can set this option to any separator character(s) that you want to use instead.

fs

By default, the built-in Node.js module (fs) is used to work with the file system. You can replace any method with your own.

interface FileSystemAdapter {
    lstat?: typeof fs.lstat;
    stat?: typeof fs.stat;
    lstatSync?: typeof fs.lstatSync;
    statSync?: typeof fs.statSync;
    readdir?: typeof fs.readdir;
    readdirSync?: typeof fs.readdirSync;
}

const settings = new fsScandir.Settings({
    fs: { lstat: fakeLstat }
});

old and modern mode

This package has two modes that are used depending on the environment and parameters of use.

old

When working in the old mode, the directory is read first (fs.readdir), then the type of entries is determined (fs.lstat and/or fs.stat for symbolic links).

modern

In the modern mode, reading the directory (fs.readdir with the withFileTypes option) is combined with obtaining information about its entries. An additional call for symbolic links (fs.stat) is still present.

This mode makes fewer calls to the file system. It’s faster.

Changelog

See the Releases section of our GitHub project for changelog for each release version.



agent-base

Turn a function into an http.Agent instance

Build Status

This module provides an http.Agent generator. That is, you pass it an async callback function, and it returns a new http.Agent instance that will invoke the given callback function when sending outbound HTTP requests.

Some subclasses:

Here are some more interesting uses of agent-base. Send a pull request to list yours!

Installation

Install with npm:

$ npm install agent-base

Example

Here’s a minimal example that creates a new net.Socket connection to the server for every HTTP request (i.e. the equivalent of agent: false option):

var net = require('net');
var tls = require('tls');
var url = require('url');
var http = require('http');
var agent = require('agent-base');

var endpoint = 'http://nodejs.org/api/';
var parsed = url.parse(endpoint);

// This is the important part!
parsed.agent = agent(function (req, opts) {
  var socket;
  // `secureEndpoint` is true when using the https module
  if (opts.secureEndpoint) {
    socket = tls.connect(opts);
  } else {
    socket = net.connect(opts);
  }
  return socket;
});

// Everything else works just like normal...
http.get(parsed, function (res) {
  console.log('"response" event!', res.headers);
  res.pipe(process.stdout);
});

Returning a Promise or using an async function is also supported:

agent(async function (req, opts) {
  await sleep(1000);
  // etc…
});

Return another http.Agent instance to “pass through” the responsibility for that HTTP request to that agent:

agent(function (req, opts) {
  return opts.secureEndpoint ? https.globalAgent : http.globalAgent;
});

API

Agent(Function callback[, Object options]) → http.Agent

Creates a base http.Agent that will execute the callback function callback for every HTTP request that it is used as the agent for. The callback function is responsible for creating a stream.Duplex instance of some kind that will be used as the underlying socket in the HTTP request.

The options object accepts the following properties:

The callback function should have the following signature:

callback(http.ClientRequest req, Object options, Function cb) → undefined

The ClientRequest req can be accessed to read request headers and the path, etc. The options object contains the options passed to the http.request()/https.request() function call, and is formatted to be directly passed to net.connect()/tls.connect(), or however else you want a Socket to be created. Pass the created socket to the callback function cb once created, and the HTTP request will continue to proceed.

If the https module is used to invoke the HTTP request, then the secureEndpoint property on options will be set to true.



repeat-string NPM version NPM monthly downloads NPM total downloads Linux Build Status

Repeat the given string n times. Fastest implementation for repeating a string.

Install

Install with npm:

$ npm install --save repeat-string

Usage

repeat

Repeat the given string the specified number of times.

Example

var repeat = require('repeat-string');
repeat('A', 5);
//=> AAAAA

Params

Benchmarks

Repeat string is significantly faster than the native method (which is itself faster than repeating):

# 2x
repeat-string  █████████████████████████  (26,953,977 ops/sec)
repeating      █████████                  (9,855,695 ops/sec)
native         ██████████████████         (19,453,895 ops/sec)

# 3x
repeat-string  █████████████████████████  (19,445,252 ops/sec)
repeating      ███████████                (8,661,565 ops/sec)
native         ████████████████████       (16,020,598 ops/sec)

# 10x
repeat-string  █████████████████████████  (23,792,521 ops/sec)
repeating      █████████                  (8,571,332 ops/sec)
native         ███████████████            (14,582,955 ops/sec)

# 50x
repeat-string  █████████████████████████  (23,640,179 ops/sec)
repeating      █████                      (5,505,509 ops/sec)
native         ██████████                 (10,085,557 ops/sec)

# 250x
repeat-string  █████████████████████████  (23,489,618 ops/sec)
repeating      ████                       (3,962,937 ops/sec)
native         ████████                   (7,724,892 ops/sec)

# 2000x
repeat-string  █████████████████████████  (20,315,172 ops/sec)
repeating      ████                       (3,297,079 ops/sec)
native         ███████                    (6,203,331 ops/sec)

# 20000x
repeat-string  █████████████████████████  (23,382,915 ops/sec)
repeating      ███                        (2,980,058 ops/sec)
native         █████                      (5,578,808 ops/sec)
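The speed shown above typically comes from a string-doubling strategy; here is a minimal sketch of that technique (an illustration of the general approach, not necessarily this package's exact source):

```javascript
// Build the result in O(log n) concatenations instead of n by doubling
// the string and appending it whenever the corresponding bit of the
// count is set.
function repeat(str, num) {
  let result = '';
  while (num > 0) {
    if (num & 1) result += str;
    num >>= 1;
    str += str;
  }
  return result;
}

console.log(repeat('A', 5)); // 'AAAAA'
```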

Run the benchmarks

Install dev dependencies:

npm i -d && node benchmark

About

repeat-element: Create an array by repeating the given value n times. | homepage

Contributing

Pull requests and stars are always welcome. For bugs and feature requests, please create an issue.

Commits Contributor
51 jonschlinkert
2 LinusU
2 tbusser
1 doowb
1 wooorm

Building docs

(This document was generated by verb-generate-readme (a verb generator), please don’t edit the readme directly. Any changes to the readme must be made in .verb.md.)

To generate the readme and API documentation with verb:

$ npm install -g verb verb-generate-readme && verb

Running tests

Install dev dependencies:

$ npm install -d && npm test

Author

Jon Schlinkert


This file was generated by verb-generate-readme, v0.2.0, on October 23, 2016.


Tapable

var Tapable = require("tapable");

Tapable is a class for plugin binding and applying.

Just extend it.

function MyClass() {
    Tapable.call(this);
}

MyClass.prototype = Object.create(Tapable.prototype);

MyClass.prototype.method = function() {};

Or mix it in.

function MyClass2() {
    EventEmitter.call(this);
    Tapable.call(this);
}

MyClass2.prototype = Object.create(EventEmitter.prototype);
Tapable.mixin(MyClass2.prototype);

MyClass2.prototype.method = function() {};

Public functions

apply

void apply(plugins: Plugin...)

Attaches all plugins passed as arguments to the instance, by calling apply on them.

plugin

void plugin(names: string|string[], handler: Function)

names are the names (or a single name) of the plugin interfaces the class provides.

handler is a callback function. The signature depends on the class. this is the instance of the class.

restartApplyPlugins

void restartApplyPlugins()

Should only be called from a handler function.

It restarts the process of applying handlers.

Protected functions

applyPlugins

void applyPlugins(name: string, args: any...)

Synchronously applies all registered handlers for name. The handler functions are called with all args.

applyPluginsWaterfall

any applyPluginsWaterfall(name: string, init: any, args: any...)

Synchronously applies all registered handlers for name. The handler functions are called with the return value of the previous handler and all args. The first handler receives init, and the return value of the last handler is returned by applyPluginsWaterfall.
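The waterfall semantics described above can be sketched in a few lines. This is an illustration of the behavior only, not Tapable's actual implementation:

```javascript
// Each handler receives the previous handler's return value plus the
// extra args; the last handler's return value is the overall result.
function applyPluginsWaterfall(handlers, init, ...args) {
  let current = init;
  for (const handler of handlers) {
    current = handler(current, ...args);
  }
  return current;
}

const result = applyPluginsWaterfall(
  [
    (value, suffix) => value + '-a' + suffix,
    (value, suffix) => value + '-b' + suffix
  ],
  'start',
  '!'
);
console.log(result); // 'start-a!-b!'
```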

applyPluginsAsync

void applyPluginsAsync(
    name: string,
    args: any...,
    callback: (err?: Error) -> void
)

Asynchronously applies all registered handlers for name. The handler functions are called with all args and a callback function with the signature (err?: Error) -> void. The handler functions are called in order of registration.

callback is called after all handlers are called.

applyPluginsBailResult

any applyPluginsBailResult(name: string, args: any...)

Synchronously applies all registered handlers for name. The handler functions are called with all args. If a handler function returns something !== undefined, that value is returned and no more handlers are applied.
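The bail semantics can be sketched like this (an illustration only, not Tapable's implementation): stop at the first handler that returns anything other than undefined.

```javascript
// Call handlers in registration order, bailing on the first
// result that is not undefined.
function applyPluginsBailResult(handlers, ...args) {
  for (const handler of handlers) {
    const result = handler(...args);
    if (result !== undefined) {
      return result;
    }
  }
  return undefined;
}

const answer = applyPluginsBailResult(
  [
    () => undefined,        // skipped: returned undefined
    value => value * 2,     // bails here with a value
    () => { throw new Error('never called'); }
  ],
  21
);
console.log(answer); // 42
```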

applyPluginsAsyncWaterfall

applyPluginsAsyncWaterfall(
    name: string,
    init: any,
    callback: (err: Error, result: any) -> void
)

Asynchronously applies all registered handlers for name. The handler functions are called with the current value and a callback function with the signature (err: Error, nextValue: any) -> void. When called, nextValue is the current value for the next handler. The current value for the first handler is init. After all handlers are applied, callback is called with the last value. If any handler passes a value for err, the callback is called with this error and no more handlers are called.

applyPluginsAsyncSeries

applyPluginsAsyncSeries(
    name: string,
    args: any...,
    callback: (err: Error, result: any) -> void
)

Asynchronously applies all registered handlers for name. The handler functions are called with all args and a callback function with the signature (err: Error) -> void. The handlers are called in series, one at a time. After all handlers are applied, callback is called. If any handler passes a value for err, the callback is called with this error and no more handlers are called.
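A minimal sketch of these series semantics (again, an illustration rather than Tapable's source): each handler runs only after the previous one calls its callback, and an error short-circuits to the final callback.

```javascript
// Run handlers one at a time; an err from any handler skips the rest.
function applyPluginsAsyncSeries(handlers, args, callback) {
  let index = 0;
  function next(err) {
    if (err) return callback(err);
    if (index === handlers.length) return callback(null);
    const handler = handlers[index];
    index += 1;
    handler(...args, next);
  }
  next();
}

const order = [];
applyPluginsAsyncSeries(
  [
    (value, cb) => { order.push('first:' + value); cb(); },
    (value, cb) => { order.push('second:' + value); cb(); }
  ],
  ['x'],
  err => { order.push(err ? 'failed' : 'done'); }
);
console.log(order); // [ 'first:x', 'second:x', 'done' ]
```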

applyPluginsParallel

applyPluginsParallel(
    name: string,
    args: any...,
    callback: (err?: Error) -> void
)

Applies all registered handlers for name in parallel. The handler functions are called with all args and a callback function with the signature (err?: Error) -> void. The callback function is called once all handlers have called their callback without an err. If any handler calls its callback with an err, callback is invoked with this error and the other handlers are ignored.

restartApplyPlugins cannot be used.

applyPluginsParallelBailResult

applyPluginsParallelBailResult(
    name: string,
    args: any...,
    callback: (err: Error, result: any) -> void
)

Applies all registered handlers for name in parallel. The handler functions are called with all args and a callback function with the signature (err?: Error) -> void. Handler functions must call the callback. They can either pass an error, pass undefined, or pass a value. The first result (either error or value) that is not undefined is passed to the callback. The order is defined by registration, not by the speed of the handler functions; this function compensates for that.

restartApplyPlugins cannot be used.



@datastructures-js/queue

build npm npm npm

A highly performant queue implementation in javascript.



Table of Contents

Install

npm install --save @datastructures-js/queue

API

require

const Queue = require('@datastructures-js/queue');

import

import Queue from '@datastructures-js/queue';

Construction

using “new Queue(array)”

Example
// empty queue
const queue = new Queue();

// from an array
const queue = new Queue([1, 2, 3]);

using “Queue.fromArray(array)”

Example
// empty queue
const queue = Queue.fromArray([]);

// with elements
const list = [10, 3, 8, 40, 1];
const queue = Queue.fromArray(list);

// If the list should not be mutated, simply construct the queue from a copy of it.
const queue = Queue.fromArray(list.slice(0));

.enqueue(element)

adds an element at the back of the queue.

params
name type
element object
runtime
O(1)

Example

queue.enqueue(10);
queue.enqueue(20);

.front()

peeks on the front element of the queue.

return
object
runtime
O(1)

Example

console.log(queue.front()); // 10

.back()

peeks on the back element in the queue.

return
object
runtime
O(1)

Example

console.log(queue.back()); // 20

.dequeue()

dequeues the front element of the queue. It does not use .shift() to dequeue an element. Instead, it uses a pointer to get the front element and only removes consumed elements once the pointer reaches half the size of the queue.

return
object
runtime
O(n*log(n))

Example

console.log(queue.dequeue()); // 10
console.log(queue.front()); // 20

Dequeuing all elements takes O(n*log(n)) instead of the O(n²) cost of using .shift().

Here’s a benchmark:

dequeuing 1 million elements in Node v12
.dequeue() .shift()
~ 40 ms ~ 3 minutes
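The pointer technique described above can be sketched in a few lines. This is an illustrative reimplementation of the idea, not the package's source: a moving offset marks the logical front, and the backing array is trimmed only occasionally.

```javascript
// A queue that dequeues with a moving pointer instead of
// Array.prototype.shift(), trimming consumed slots only once the
// pointer passes half the backing array's length.
class PointerQueue {
  constructor(elements = []) {
    this._elements = elements;
    this._offset = 0;
  }

  enqueue(element) {
    this._elements.push(element);
    return this;
  }

  dequeue() {
    if (this.size() === 0) return null;
    const front = this._elements[this._offset];
    this._offset += 1;
    // Trim occasionally so each individual dequeue stays cheap.
    if (this._offset * 2 >= this._elements.length) {
      this._elements = this._elements.slice(this._offset);
      this._offset = 0;
    }
    return front;
  }

  size() {
    return this._elements.length - this._offset;
  }
}

const q = new PointerQueue([1, 2, 3]);
q.enqueue(4);
console.log(q.dequeue()); // 1
console.log(q.dequeue()); // 2
console.log(q.size()); // 2
```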

.isEmpty()

checks if the queue is empty.

return
boolean
runtime
O(1)

Example

console.log(queue.isEmpty()); // false

.size()

returns the number of elements in the queue.

return
number
runtime
O(1)

Example

console.log(queue.size()); // 1

.clone()

creates a shallow copy of the queue.

return
Queue
runtime
O(n)

Example

const queue = Queue.fromArray([{ id: 2 }, { id: 4 } , { id: 8 }]);
const clone =  queue.clone();

clone.dequeue();

console.log(queue.front()); // { id: 2 }
console.log(clone.front()); // { id: 4 }

.toArray()

returns a copy of the remaining elements as an array.

return
array
runtime
O(n)

Example

queue.enqueue(4);
queue.enqueue(2);
console.log(queue.toArray()); // [20, 4, 2]

.clear()

clears all elements from the queue.

runtime
O(1)

Example

queue.clear();
queue.size(); // 0

Build

lint + tests

grunt build

Esrecurse Build Status

Esrecurse (esrecurse) provides recursive traversal functionality for ECMAScript abstract syntax trees.

Example Usage

The following code will output all variables declared at the root of a file.

esrecurse.visit(ast, {
    XXXStatement: function (node) {
        this.visit(node.left);
        // do something...
        this.visit(node.right);
    }
});

We can use a Visitor instance.

var visitor = new esrecurse.Visitor({
    XXXStatement: function (node) {
        this.visit(node.left);
        // do something...
        this.visit(node.right);
    }
});

visitor.visit(ast);

We can subclass Visitor easily.

class Derived extends esrecurse.Visitor {
    constructor()
    {
        super(null);
    }

    XXXStatement(node) {
    }
}
Or, using the legacy prototype style:

function DerivedVisitor() {
    esrecurse.Visitor.call(/* this for constructor */  this  /* visitor object automatically becomes this. */);
}
util.inherits(DerivedVisitor, esrecurse.Visitor);
DerivedVisitor.prototype.XXXStatement = function (node) {
    this.visit(node.left);
    // do something...
    this.visit(node.right);
};

And you can invoke default visiting operation inside custom visit operation.

function DerivedVisitor() {
    esrecurse.Visitor.call(/* this for constructor */  this  /* visitor object automatically becomes this. */);
}
util.inherits(DerivedVisitor, esrecurse.Visitor);
DerivedVisitor.prototype.XXXStatement = function (node) {
    // do something...
    this.visitChildren(node);
};

The childVisitorKeys option customizes the behaviour of this.visitChildren(node), letting us handle user-defined node types.

// This tree contains a user-defined `TestExpression` node.
var tree = {
    type: 'TestExpression',

    // This 'argument' is the property containing the other **node**.
    argument: {
        type: 'Literal',
        value: 20
    },

    // This 'extended' is the property not containing the other **node**.
    extended: true
};
esrecurse.visit(
    tree,
    {
        Literal: function (node) {
            // do something...
        }
    },
    {
        // Extending the existing traversing rules.
        childVisitorKeys: {
            // TargetNodeName: [ 'keys', 'containing', 'the', 'other', '**node**' ]
            TestExpression: ['argument']
        }
    }
);

We can use the fallback option as well. If the fallback option is "iteration", esrecurse visits all enumerable properties of unknown nodes. Please note that circular references cause a stack overflow. An AST might have circular references in additional properties for some purpose (e.g. node.parent).

esrecurse.visit(
    ast,
    {
        Literal: function (node) {
            // do something...
        }
    },
    {
        fallback: 'iteration'
    }
);

If the fallback option is a function, esrecurse calls this function to determine the enumerable properties of unknown nodes. Please note that circular references cause a stack overflow. An AST might have circular references in additional properties for some purpose (e.g. node.parent).

esrecurse.visit(
    ast,
    {
        Literal: function (node) {
            // do something...
        }
    },
    {
        fallback: function (node) {
            return Object.keys(node).filter(function(key) {
                return key !== 'argument'
            });
        }
    }
);

Google Cloud Platform logo



Google Cloud Common Paginator: Node.js Client

release level npm version codecov

A result paging utility used by Google node.js modules

A comprehensive list of changes in each version may be found in the CHANGELOG.

Read more about the client libraries for Cloud APIs, including the older Google APIs Client Libraries, in Client Libraries Explained.

Table of contents:

Quickstart

Installing the client library

npm install @google-cloud/paginator

Using the client library

const {paginator} = require('@google-cloud/paginator');
console.log(paginator);

Samples

Samples are in the samples/ directory. The samples’ README.md has instructions for running the samples.

Sample Source Code Try it
Quickstart source code Open in Cloud Shell

The Google Cloud Common Paginator Node.js Client API Reference documentation also contains samples.

Our client libraries follow the Node.js release schedule. Libraries are compatible with all current active and maintenance versions of Node.js.

Client libraries targeting some end-of-life versions of Node.js are available, and can be installed via npm dist-tags. The dist-tags follow the naming convention legacy-(version).

Legacy Node.js versions are supported as a best effort:

Legacy tags available

Versioning

This library follows Semantic Versioning.

This library is considered to be General Availability (GA). This means it is stable; the code surface will not change in backwards-incompatible ways unless absolutely necessary (e.g. because of critical security issues) or with an extensive deprecation period. Issues and requests against GA libraries are addressed with the highest priority.

More Information: Google Cloud Platform Launch Stages

Contributing

Contributions welcome! See the Contributing Guide.

Please note that this README.md, the samples/README.md, and a variety of configuration files in this repository (including .nycrc and tsconfig.json) are generated from a central template. To edit one of these files, make an edit to its template in this directory.

Apache Version 2.0

See LICENSE



type-is

NPM Version NPM Downloads Node.js Version Build Status Test Coverage

Infer the content-type of a request.

Install

This is a Node.js module available through the npm registry. Installation is done using the npm install command:

$ npm install type-is

API

var http = require('http')
var typeis = require('type-is')

http.createServer(function (req, res) {
  var istext = typeis(req, ['text/*'])
  res.end('you ' + (istext ? 'sent' : 'did not send') + ' me text')
})

typeis(request, types)

Checks if the request is one of the types. If the request has no body, even if there is a Content-Type header, then null is returned. If the Content-Type header is invalid or does not match any of the types, then false is returned. Otherwise, a string of the type that matched is returned.

The request argument is expected to be a Node.js HTTP request. The types argument is an array of type strings.

Each type in the types array can be one of the following:

Some examples to illustrate the inputs and returned value:

// req.headers.content-type = 'application/json'

typeis(req, ['json']) // => 'json'
typeis(req, ['html', 'json']) // => 'json'
typeis(req, ['application/*']) // => 'application/json'
typeis(req, ['application/json']) // => 'application/json'

typeis(req, ['html']) // => false

typeis.hasBody(request)

Returns a Boolean indicating whether the given request has a body, regardless of the Content-Type header.

Having a body has no relation to how large the body is (it may be 0 bytes). This is similar to how file existence works. If a body does exist, then this indicates that there is data to read from the Node.js request stream.

if (typeis.hasBody(req)) {
  // read the body, since there is one

  req.on('data', function (chunk) {
    // ...
  })
}

typeis.is(mediaType, types)

Checks if the mediaType is one of the types. If the mediaType is invalid or does not match any of the types, then false is returned. Otherwise, a string of the type that matched is returned.

The mediaType argument is expected to be a media type string. The types argument is an array of type strings.

Each type in the types array can be one of the following:

Some examples to illustrate the inputs and returned value:

var mediaType = 'application/json'

typeis.is(mediaType, ['json']) // => 'json'
typeis.is(mediaType, ['html', 'json']) // => 'json'
typeis.is(mediaType, ['application/*']) // => 'application/json'
typeis.is(mediaType, ['application/json']) // => 'application/json'

typeis.is(mediaType, ['html']) // => false

Examples

Example body parser

var express = require('express')
var typeis = require('type-is')

var app = express()

app.use(function bodyParser (req, res, next) {
  if (!typeis.hasBody(req)) {
    return next()
  }

  switch (typeis(req, ['urlencoded', 'json', 'multipart'])) {
    case 'urlencoded':
      // parse urlencoded body
      throw new Error('implement urlencoded body parsing')
    case 'json':
      // parse json body
      throw new Error('implement json body parsing')
    case 'multipart':
      // parse multipart body
      throw new Error('implement multipart body parsing')
    default:
      // 415 error code
      res.statusCode = 415
      res.end()
      break
  }
})


file-entry-cache

Super simple cache for file metadata, useful for processes that work on a given series of files and only need to repeat the job on the files that changed since the previous run of the process.

NPM Version Build Status

install

npm i --save file-entry-cache

Usage

The module exposes two functions create and createFromFile.

create(cacheName, [directory, useCheckSum])

createFromFile(pathToCache, [useCheckSum])

// loads the cache; if one does not exist for the given
// id, a new one will be prepared to be created
var fileEntryCache = require('file-entry-cache');

var cache = fileEntryCache.create('testCache');

var files = expand('../fixtures/*.txt');

// the first time this method is called, it will return all the files
var oFiles = cache.getUpdatedFiles(files);

// this will persist this to disk checking each file stats and
// updating the meta attributes `size` and `mtime`.
// custom fields could also be added to the meta object and will be persisted
// in order to retrieve them later
cache.reconcile();

// use this if you want the non visited file entries to be kept in the cache
// for more than one execution
//
// cache.reconcile( true /* noPrune */)

// on a second run
var cache2 = fileEntryCache.create('testCache');

// now returns only the files that were modified, or none
// if no files were modified since the previous run
var oFiles = cache2.getUpdatedFiles(files);

// to prevent a file from being considered unmodified
// (useful if a file failed some sort of validation)
// remove its entry from the cache
cache.removeEntry('path/to/file'); // same path as the one received by `getUpdatedFiles`
// the file will then appear as modified again until the validation passes;
// once it passes, stop removing it from the cache

// if you need all the files, so you can determine what to do with the changed ones
// you can call
var oFiles = cache.normalizeEntries(files);

// oFiles will be an array of objects like the following
entry = {
  key: 'some/name/file', // the path to the file
  changed: true, // whether the file changed since the previous run
  meta: {
    size: 3242, // the size of the file
    mtime: 231231231, // the modification time of the file
    data: {} // extra fields stored for this file (useful to save the result of a transformation on the file)
  }
}

Motivation for this module

I needed a super simple and dumb in-memory cache with optional disk persistence (write-back cache) in order to make a script that beautifies files with esformatter run only on the files changed since the last run.

In doing so the process of beautifying files was reduced from several seconds to a small fraction of a second.

This module uses flat-cache, a super simple key/value cache storage with optional file persistence.

The main idea is to read the files when the task begins, apply the required transforms, and, if the process succeeds, store the new state of the files. The next call to getUpdatedFiles will then return only the files that were modified, making the process finish faster.

This module could also be used by processes that modify files by applying a transform; in that case the result of the transform could be stored in the meta field of the entries, and anything added to the meta field will be persisted. Such processes won't need to call getUpdatedFiles; they will instead call normalizeEntries, which returns the entries with a changed field that can be used to determine whether a file changed. If it did not change, the stored transform result can be used instead of applying the transformation again, saving time when only a few files changed.

In the worst case scenario all the files will be processed. In the best case scenario only a few of them will be processed.

Important notes




google-p12-pem: Node.js Client

release level npm version codecov

Convert Google .p12 keys to .pem keys.

A comprehensive list of changes in each version may be found in the CHANGELOG.

Read more about the client libraries for Cloud APIs, including the older Google APIs Client Libraries, in Client Libraries Explained.

Table of contents:

Quickstart

Installing the client library

npm install google-p12-pem

Using the client library

const {getPem} = require('google-p12-pem');

/**
 * Given a p12 file, convert it to the PEM format.
 * @param {string} pathToCert The relative path to a p12 file.
 */
async function quickstart() {
  // TODO(developer): provide the path to your cert
  // const pathToCert = 'path/to/cert.p12';

  const pem = await getPem(pathToCert);
  console.log('The converted PEM:');
  console.log(pem);
}

quickstart();

CLI style

gp12-pem myfile.p12 > output.pem

Samples

Samples are in the samples/ directory. The samples’ README.md has instructions for running the samples.

Sample Source Code Try it
Quickstart source code Open in Cloud Shell

Our client libraries follow the Node.js release schedule. Libraries are compatible with all current active and maintenance versions of Node.js.

Client libraries targeting some end-of-life versions of Node.js are available, and can be installed via npm dist-tags. The dist-tags follow the naming convention legacy-(version).

Legacy Node.js versions are supported as a best effort:

Legacy tags available

Versioning

This library follows Semantic Versioning.

This library is considered to be General Availability (GA). This means it is stable; the code surface will not change in backwards-incompatible ways unless absolutely necessary (e.g. because of critical security issues) or with an extensive deprecation period. Issues and requests against GA libraries are addressed with the highest priority.

More Information: Google Cloud Platform Launch Stages

Contributing

Contributions welcome! See the Contributing Guide.

Please note that this README.md, the samples/README.md, and a variety of configuration files in this repository (including .nycrc and tsconfig.json) are generated from a central template. To edit one of these files, make an edit to its template in this directory.

Apache Version 2.0

See LICENSE



is-glob NPM version NPM downloads Build Status

Returns true if the given string looks like a glob pattern or an extglob pattern. This makes it easy to create code that only uses external modules like node-glob when necessary, resulting in much faster code execution and initialization time, and a better user experience.

Install

Install with npm:

$ npm install --save is-glob

You might also be interested in is-valid-glob and has-glob.

Usage

var isGlob = require('is-glob');

True

Patterns that have glob characters or regex patterns will return true:

isGlob('!foo.js');
isGlob('*.js');
isGlob('**/abc.js');
isGlob('abc/*.js');
isGlob('abc/(aaa|bbb).js');
isGlob('abc/[a-z].js');
isGlob('abc/{a,b}.js');
isGlob('abc/?.js');
//=> true

Extglobs

isGlob('abc/@(a).js');
isGlob('abc/!(a).js');
isGlob('abc/+(a).js');
isGlob('abc/*(a).js');
isGlob('abc/?(a).js');
//=> true

False

Escaped globs or extglobs return false:

isGlob('abc/\\@(a).js');
isGlob('abc/\\!(a).js');
isGlob('abc/\\+(a).js');
isGlob('abc/\\*(a).js');
isGlob('abc/\\?(a).js');
isGlob('\\!foo.js');
isGlob('\\*.js');
isGlob('\\*\\*/abc.js');
isGlob('abc/\\*.js');
isGlob('abc/\\(aaa|bbb).js');
isGlob('abc/\\[a-z].js');
isGlob('abc/\\{a,b}.js');
isGlob('abc/\\?.js');
//=> false

Strings that do not contain glob patterns return false:

isGlob('abc.js');
isGlob('abc/def/ghi.js');
isGlob('foo.js');
isGlob('abc/@.js');
isGlob('abc/+.js');
isGlob();
isGlob(null);
//=> false

Arrays are also false (If you want to check if an array has a glob pattern, use has-glob):

isGlob(['**/*.js']);
isGlob(['foo.js']);
//=> false
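The basic check can be approximated by scanning for unescaped glob metacharacters. The following is a naive sketch for illustration only; unlike the real module it does not distinguish extglob forms such as @(a) or +(a) and misses many edge cases:

```javascript
// Naive sketch: a string "looks like a glob" if it contains glob
// metacharacters that are not escaped with a backslash.
// (The real is-glob module is considerably more thorough.)
function looksLikeGlob(str) {
  if (typeof str !== 'string' || str === '') return false;
  // drop escaped characters first, then look for remaining metacharacters
  var unescaped = str.replace(/\\./g, '');
  return /[*?[\]{}!()]/.test(unescaped);
}
```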

About

Contributing

Pull requests and stars are always welcome. For bugs and feature requests, please create an issue.

Commits Contributor
40 jonschlinkert
1 tuvistavie

Building docs

(This document was generated by verb-generate-readme (a verb generator), please don’t edit the readme directly. Any changes to the readme must be made in .verb.md.)

To generate the readme and API documentation with verb:

$ npm install -g verb verb-generate-readme && verb

Running tests

Install dev dependencies:

$ npm install -d && npm test

Author

Jon Schlinkert


This file was generated by verb-generate-readme, v0.1.31, on October 12, 2016.



content-disposition

NPM Version NPM Downloads Node.js Version Build Status Test Coverage

Create and parse HTTP Content-Disposition header

Installation

$ npm install content-disposition

API

var contentDisposition = require('content-disposition')

contentDisposition(filename, options)

Create an attachment Content-Disposition header value using the given file name, if supplied. The filename is optional; if no file name is desired but you want to specify options, set filename to undefined.

res.setHeader('Content-Disposition', contentDisposition('∫ maths.pdf'))

Note: HTTP headers are of the ISO-8859-1 character set. If you are writing this header through a means different from setHeader in Node.js, you'll want to specify the 'binary' encoding in Node.js.

Options

contentDisposition accepts these properties in the options object.

fallback

If the filename option is outside ISO-8859-1, then the file name is actually stored in a supplemental field for clients that support Unicode file names and a ISO-8859-1 version of the file name is automatically generated.

This specifies the ISO-8859-1 file name to override the automatic generation, or disables the generation altogether. Defaults to true.

If the filename option is ISO-8859-1 and this option is specified and has a different value, then the filename option is encoded in the extended field and this set as the fallback field, even though they are both ISO-8859-1.

type

Specifies the disposition type, defaults to "attachment". This can also be "inline", or any other value (all values except inline are treated like attachment, but can convey additional information if both parties agree to it). The type is normalized to lower-case.

contentDisposition.parse(string)

var disposition = contentDisposition.parse('attachment; filename="EURO rates.txt"; filename*=UTF-8\'\'%e2%82%ac%20rates.txt')

Parse a Content-Disposition header string. This automatically handles extended (“Unicode”) parameters by decoding them and providing them under the standard parameter name. This will return an object with the following properties (examples are shown for the string 'attachment; filename="EURO rates.txt"; filename*=UTF-8\'\'%e2%82%ac%20rates.txt'):
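The extended filename* parameter follows the charset'language'value shape defined by RFC 5987. As an illustration only (the module handles this internally), a simplified decoder for the UTF-8 case looks like this:

```javascript
// Simplified sketch of RFC 5987 extended-value decoding: the value has
// the form charset'language'percent-encoded-bytes. Only UTF-8 is handled
// here; the real content-disposition module covers more cases.
function decodeExtValue(extValue) {
  var parts = extValue.split("'"); // [charset, language, encoded value]
  if (parts.length !== 3) throw new Error('invalid extended field value');
  if (parts[0].toUpperCase() !== 'UTF-8') throw new Error('unsupported charset');
  return decodeURIComponent(parts[2]);
}
```

For the string above, the filename* value "UTF-8''%e2%82%ac%20rates.txt" decodes to the Unicode file name "€ rates.txt".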

Examples

Send a file for download

var contentDisposition = require('content-disposition')
var destroy = require('destroy')
var fs = require('fs')
var http = require('http')
var onFinished = require('on-finished')

var filePath = '/path/to/public/plans.pdf'

http.createServer(function onRequest (req, res) {
  // set headers
  res.setHeader('Content-Type', 'application/pdf')
  res.setHeader('Content-Disposition', contentDisposition(filePath))

  // send file
  var stream = fs.createReadStream(filePath)
  stream.pipe(res)
  onFinished(res, function () {
    destroy(stream)
  })
})

Testing

$ npm test

References



unset-value NPM version NPM monthly downloads NPM total downloads Linux Build Status

Delete nested properties from an object using dot notation.

Install

Install with npm:

$ npm install --save unset-value

Usage

var unset = require('unset-value');

var obj = {a: {b: {c: 'd', e: 'f'}}};
unset(obj, 'a.b.c');
console.log(obj);
//=> {a: {b: {e: 'f'}}};

Examples

Updates the object when a property is deleted

var obj = {a: 'b'};
unset(obj, 'a');
console.log(obj);
//=> {}

Returns true when a property is deleted

unset({a: 'b'}, 'a') // true

Returns true when a property does not exist

This is consistent with delete behavior in that it does not throw when a property does not exist.

unset({a: {b: {c: 'd'}}}, 'd') // true

delete nested values

var one = {a: {b: {c: 'd'}}};
unset(one, 'a.b');
console.log(one);
//=> {a: {}}

var two = {a: {b: {c: 'd'}}};
unset(two, 'a.b.c');
console.log(two);
//=> {a: {b: {}}}

var three = {a: {b: {c: 'd', e: 'f'}}};
unset(three, 'a.b.c');
console.log(three);
//=> {a: {b: {e: 'f'}}}

throws on invalid args

unset();
// 'expected an object.'
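The behavior shown above can be approximated in a few lines of plain JavaScript. This is a simplified sketch for illustration; the real module also guards against unsafe keys and handles further edge cases:

```javascript
// Simplified sketch of dot-notation deletion (illustration only,
// not the unset-value implementation).
function unsetSketch(obj, prop) {
  if (obj === null || typeof obj !== 'object') {
    throw new TypeError('expected an object.');
  }
  var keys = String(prop).split('.');
  var last = keys.pop();
  var target = obj;
  for (var i = 0; i < keys.length; i++) {
    target = target[keys[i]];
    // a missing intermediate object means there is nothing to delete
    if (target === null || typeof target !== 'object') return true;
  }
  delete target[last];
  return true;
}
```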

About

Contributing

Pull requests and stars are always welcome. For bugs and feature requests, please create an issue.

Commits Contributor
6 jonschlinkert
2 wtgtybhertgeghgtwtg

Building docs

(This project’s readme.md is generated by verb, please don’t edit the readme directly. Any changes to the readme must be made in the .verb.md readme template.)

To generate the readme, run the following command:

$ npm install -g verbose/verb#dev verb-generate-readme && verb

Running tests

Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command:

$ npm install && npm test

Author

Jon Schlinkert


This file was generated by verb-generate-readme, v0.4.2, on February 25, 2017.



is-accessor-descriptor NPM version NPM monthly downloads NPM total downloads Linux Build Status

Returns true if a value has the characteristics of a valid JavaScript accessor descriptor.

Please consider following this project’s author, Jon Schlinkert, and consider starring the project to show your :heart: and support.

Install

Install with npm:

$ npm install --save is-accessor-descriptor

Usage

var isAccessor = require('is-accessor-descriptor');

isAccessor({get: function() {}});
//=> true

You may also pass an object and property name to check if the property is an accessor:

isAccessor(foo, 'bar');

Examples

false when not an object

isAccessor('a')
isAccessor(null)
isAccessor([])
//=> false

true when the object has valid properties

and the properties all have the correct JavaScript types:

isAccessor({get: noop, set: noop})
isAccessor({get: noop})
isAccessor({set: noop})
//=> true

false when the object has invalid properties

isAccessor({get: noop, set: noop, bar: 'baz'})
isAccessor({get: noop, writable: true})
isAccessor({get: noop, value: true})
//=> false

false when an accessor is not a function

isAccessor({get: noop, set: 'baz'})
isAccessor({get: 'foo', set: noop})
isAccessor({get: 'foo', bar: 'baz'})
isAccessor({get: 'foo', set: 'baz'})
//=> false

false when a value is not the correct type

isAccessor({get: noop, set: noop, enumerable: 'foo'})
isAccessor({set: noop, configurable: 'foo'})
isAccessor({get: noop, configurable: 'foo'})
//=> false
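The examples above amount to a property-by-property validation, sketched below in plain JavaScript. This is a simplified approximation, not the module's actual source:

```javascript
// Simplified sketch: an accessor descriptor may only contain get, set,
// enumerable and configurable; get/set must be functions when present,
// and enumerable/configurable must be booleans when present.
function isAccessorSketch(obj) {
  if (obj === null || typeof obj !== 'object' || Array.isArray(obj)) return false;
  if (!('get' in obj) && !('set' in obj)) return false;
  var allowed = { get: 'function', set: 'function', enumerable: 'boolean', configurable: 'boolean' };
  return Object.keys(obj).every(function (key) {
    return allowed.hasOwnProperty(key) && typeof obj[key] === allowed[key];
  });
}
```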

About

Contributing

Pull requests and stars are always welcome. For bugs and feature requests, please create an issue.

Running Tests

Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command:

Building docs

(This project’s readme.md is generated by verb, please don’t edit the readme directly. Any changes to the readme must be made in the .verb.md readme template.)

To generate the readme, run the following command:

You might also be interested in these projects:

Commits Contributor
22 jonschlinkert
2 realityking

Author

Jon Schlinkert


This file was generated by verb-generate-readme, v0.6.0, on November 01, 2017.



is-data-descriptor NPM version NPM monthly downloads NPM total downloads Linux Build Status

Returns true if a value has the characteristics of a valid JavaScript data descriptor.

Please consider following this project’s author, Jon Schlinkert, and consider starring the project to show your :heart: and support.

Install

Install with npm:

$ npm install --save is-data-descriptor

Usage

var isDataDesc = require('is-data-descriptor');

Examples

true when the descriptor has valid properties with valid values.

// `value` can be anything
isDataDesc({value: 'foo'})
isDataDesc({value: function() {}})
isDataDesc({value: true})
//=> true

false when not an object

isDataDesc('a')
//=> false
isDataDesc(null)
//=> false
isDataDesc([])
//=> false

false when the object has invalid properties

isDataDesc({value: 'foo', bar: 'baz'})
//=> false
isDataDesc({value: 'foo', get: function(){}})
//=> false
isDataDesc({get: function(){}, value: 'foo'})
//=> false

false when a value is not the correct type

isDataDesc({value: 'foo', enumerable: 'foo'})
//=> false
isDataDesc({value: 'foo', configurable: 'foo'})
//=> false
isDataDesc({value: 'foo', writable: 'foo'})
//=> false

Valid properties

The only valid data descriptor properties are the following:

To be a valid data descriptor, either value or writable must be defined.
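These rules can be sketched as a short validation function in plain JavaScript. This is a simplified approximation for illustration, not the module's actual source:

```javascript
// Simplified sketch: a data descriptor may only contain value, writable,
// enumerable and configurable; value may be anything, while the other
// three must be booleans when present. Either value or writable must exist.
function isDataDescSketch(obj) {
  if (obj === null || typeof obj !== 'object' || Array.isArray(obj)) return false;
  if (!('value' in obj) && !('writable' in obj)) return false;
  var booleans = ['writable', 'enumerable', 'configurable'];
  return Object.keys(obj).every(function (key) {
    if (key === 'value') return true; // value can be anything
    return booleans.indexOf(key) !== -1 && typeof obj[key] === 'boolean';
  });
}
```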

Invalid properties

A descriptor may have additional invalid properties (an error will not be thrown).

var foo = {};

Object.defineProperty(foo, 'bar', {
  enumerable: true,
  whatever: 'blah', // invalid, but doesn't cause an error
  get: function() {
    return 'baz';
  }
});

console.log(foo.bar);
//=> 'baz'

About

Contributing

Pull requests and stars are always welcome. For bugs and feature requests, please create an issue.

Running Tests

Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command:

Building docs

(This project’s readme.md is generated by verb, please don’t edit the readme directly. Any changes to the readme must be made in the .verb.md readme template.)

To generate the readme, run the following command:

You might also be interested in these projects:

Commits Contributor
21 jonschlinkert
2 realityking

Author

Jon Schlinkert


This file was generated by verb-generate-readme, v0.6.0, on November 01, 2017.



TypeScript

Build Status Devops Build Status npm version Downloads

TypeScript is a language for application-scale JavaScript. TypeScript adds optional types to JavaScript that support tools for large-scale JavaScript applications for any browser, for any host, on any OS. TypeScript compiles to readable, standards-based JavaScript. Try it out at the playground, and stay up to date via our blog and Twitter account.

Find others who are using TypeScript at our community page.

Installing

For the latest stable version:

npm install -g typescript

For our nightly builds:

npm install -g typescript@next

Contribute

There are many ways to contribute to TypeScript.

Submit bugs and help us verify fixes as they are checked in.
Review the source code changes.
Engage with other TypeScript users and developers on StackOverflow.
Help each other in the TypeScript Community Discord.
Join the #typescript discussion on Twitter.
Contribute bug fixes.
Read the archived language specification (docx, pdf, md).

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.

Documentation

Building

In order to build the TypeScript compiler, ensure that you have Git and Node.js installed.

Clone a copy of the repo:

git clone https://github.com/microsoft/TypeScript.git

Change to the TypeScript directory:

cd TypeScript

Install Gulp tools and dev dependencies:

npm install -g gulp
npm ci

Use one of the following to build and test:

gulp local             # Build the compiler into built/local.
gulp clean             # Delete the built compiler.
gulp LKG               # Replace the last known good with the built one.
                       # Bootstrapping step to be executed when the built compiler reaches a stable state.
gulp tests             # Build the test infrastructure using the built compiler.
gulp runtests          # Run tests using the built compiler and test infrastructure.
                       # You can override the specific suite runner used or specify a test for this command.
                       # Use --tests=<testPath> for a specific test and/or --runner=<runnerName> for a specific suite.
                       # Valid runners include conformance, compiler, fourslash, project, user, and docker
                       # The user and docker runners are extended test suite runners - the user runner
                       # works on disk in the tests/cases/user directory, while the docker runner works in containers.
                       # You'll need to have the docker executable in your system path for the docker runner to work.
gulp runtests-parallel # Like runtests, but split across multiple threads. Uses a number of threads equal to the system
                       # core count by default. Use --workers=<number> to adjust this.
gulp baseline-accept   # This replaces the baseline test results with the results obtained from gulp runtests.
gulp lint              # Runs eslint on the TypeScript source.
gulp help              # List the above commands.

Usage

node built/local/tsc.js hello.ts

Roadmap

For details on our planned features and future direction please refer to our roadmap.

Google Cloud Platform logo



node-gtoken

npm version Known Vulnerabilities codecov Code Style: Google

Node.js Google Authentication Service Account Tokens

This is a low level utility library used to interact with Google Authentication services. In most cases, you probably want to use the google-auth-library instead.

Installation

npm install gtoken

Usage

Use with a .pem or .p12 key file:

const { GoogleToken } = require('gtoken');
const gtoken = new GoogleToken({
  keyFile: 'path/to/key.pem', // or path to .p12 key file
  email: 'my_service_account_email@developer.gserviceaccount.com',
  scope: ['https://scope1', 'https://scope2'] // or space-delimited string of scopes
});

gtoken.getToken((err, tokens) => {
  if (err) {
    console.log(err);
    return;
  }
  console.log(tokens);
  // {
  //   access_token: 'very-secret-token',
  //   expires_in: 3600,
  //   token_type: 'Bearer'
  // }
});

You can also use the async/await style API:

const tokens = await gtoken.getToken()
console.log(tokens);

Or use promises:

gtoken.getToken()
  .then(tokens => {
    console.log(tokens)
  })
  .catch(console.error);

Use with a service account .json key file:

const { GoogleToken } = require('gtoken');
const gtoken = new GoogleToken({
  keyFile: 'path/to/key.json',
  scope: ['https://scope1', 'https://scope2'] // or space-delimited string of scopes
});

gtoken.getToken((err, tokens) => {
  if (err) {
    console.log(err);
    return;
  }
  console.log(tokens);
});

Pass the private key as a string directly:

const key = '-----BEGIN RSA PRIVATE KEY-----\nXXXXXXXXXXX...';
const { GoogleToken } = require('gtoken');
const gtoken = new GoogleToken({
  email: 'my_service_account_email@developer.gserviceaccount.com',
  scope: ['https://scope1', 'https://scope2'], // or space-delimited string of scopes
  key: key
});

Options

Various options that can be set when initializing the gtoken object.

.getToken(callback)

Returns the cached token or requests a new one and returns it.

gtoken.getToken((err, token) => {
  console.log(err || token);
  // gtoken.rawToken value is also set
});

.getCredentials(‘path/to/key.json’)

Given a keyfile, returns the key and (if available) the client email.

const creds = await gtoken.getCredentials('path/to/key.json');

Properties

Various properties set on the gtoken object after call to .getToken().

.hasExpired()

Returns true if the token has expired or does not exist.

const tokens = await gtoken.getToken();
gtoken.hasExpired(); // false

.revokeToken()

Revoke the token if set.

await gtoken.revokeToken();
console.log('Token revoked!');

Downloading your private .p12 key from Google

  1. Open the Google Developer Console.
  2. Open your project and under “APIs & auth”, click Credentials.
  3. Generate a new .p12 key and download it into your project.

Converting your .p12 key to a .pem key

You can just specify your .p12 file (with the .p12 extension) as the keyFile and it will automatically be converted to a .pem on the fly; however, this incurs a slight performance hit. If you'd like to convert to a .pem for use later, use OpenSSL if you have it installed.

$ openssl pkcs12 -in key.p12 -nodes -nocerts > key.pem

Don’t forget: the passphrase when converting these files is the string 'notasecret'.



node-jwa Build Status

A JSON Web Algorithms implementation focusing (exclusively, at this point) on the algorithms necessary for JSON Web Signatures.

This library supports all of the required, recommended and optional cryptographic algorithms for JWS:

alg Parameter Value Digital Signature or MAC Algorithm
HS256 HMAC using SHA-256 hash algorithm
HS384 HMAC using SHA-384 hash algorithm
HS512 HMAC using SHA-512 hash algorithm
RS256 RSASSA using SHA-256 hash algorithm
RS384 RSASSA using SHA-384 hash algorithm
RS512 RSASSA using SHA-512 hash algorithm
PS256 RSASSA-PSS using SHA-256 hash algorithm
PS384 RSASSA-PSS using SHA-384 hash algorithm
PS512 RSASSA-PSS using SHA-512 hash algorithm
ES256 ECDSA using P-256 curve and SHA-256 hash algorithm
ES384 ECDSA using P-384 curve and SHA-384 hash algorithm
ES512 ECDSA using P-521 curve and SHA-512 hash algorithm
none No digital signature or MAC value included

Please note that PS* only works on Node 6.12+ (excluding 7.x).



Requirements

In order to run the tests, a recent version of OpenSSL is required. The version that comes with OS X (OpenSSL 0.9.8r 8 Feb 2011) is not recent enough, as it does not fully support ECDSA keys. You’ll need to use a version > 1.0.0; I tested with OpenSSL 1.0.1c 10 May 2012.



Testing

To run the tests, do

$ npm test

This will generate a bunch of keypairs to use in testing. If you want to generate new keypairs, do make clean before running npm test again.

Methodology

I spawn openssl dgst -sign to test OpenSSL sign → JS verify and openssl dgst -verify to test JS sign → OpenSSL verify for each of the RSA and ECDSA algorithms.



Usage

jwa(algorithm)

Creates a new jwa object with sign and verify methods for the algorithm. Valid values for algorithm can be found in the table above ('HS256', 'HS384', etc) and are case-sensitive. Passing an invalid algorithm value will throw a TypeError.

jwa#sign(input, secretOrPrivateKey)

Sign some input with either a secret for HMAC algorithms, or a private key for RSA and ECDSA algorithms.

If input is not already a string or buffer, JSON.stringify will be called on it to attempt to coerce it.

For the HMAC algorithm, secretOrPrivateKey should be a string or a buffer. For ECDSA and RSA, the value should be a string representing a PEM encoded private key.

Output is base64url formatted. This is for convenience, as JWS expects the signature in this format. If your application needs the output in a different format, please open an issue. In the meantime, you can use brianloveswords/base64url to decode the signature.

As of Node.js v0.11.8, SPKAC support was introduced. If your Node.js version is recent enough, you can pass an object { key: '..', passphrase: '...' } instead.

jwa#verify(input, signature, secretOrPublicKey)

Verify a signature. Returns true or false.

signature should be a base64url encoded string.

For the HMAC algorithm, secretOrPublicKey should be a string or a buffer. For ECDSA and RSA, the value should be a string representing a PEM encoded public key.



Example

HMAC

const jwa = require('jwa');

const hmac = jwa('HS256');
const input = 'super important stuff';
const secret = 'shhhhhh';

const signature = hmac.sign(input, secret);
hmac.verify(input, signature, secret) // === true
hmac.verify(input, signature, 'trickery!') // === false

With keys

const fs = require('fs');
const jwa = require('jwa');
const privateKey = fs.readFileSync(__dirname + '/ecdsa-p521-private.pem');
const publicKey = fs.readFileSync(__dirname + '/ecdsa-p521-public.pem');

const ecdsa = jwa('ES512');
const input = 'very important stuff';

const signature = ecdsa.sign(input, privateKey);
ecdsa.verify(input, signature, publicKey) // === true


normalize-path NPM version NPM monthly downloads NPM total downloads Linux Build Status

Normalize slashes in a file path to be posix/unix-like forward slashes. Also condenses repeat slashes to a single slash and removes any trailing slashes, unless disabled.

Please consider following this project’s author, Jon Schlinkert, and consider starring the project to show your :heart: and support.

Install

Install with npm:

$ npm install --save normalize-path

Usage

const normalize = require('normalize-path');

console.log(normalize('\\foo\\bar\\baz\\')); 
//=> '/foo/bar/baz'

win32 namespaces

console.log(normalize('\\\\?\\UNC\\Server01\\user\\docs\\Letter.txt')); 
//=> '//?/UNC/Server01/user/docs/Letter.txt'

console.log(normalize('\\\\.\\CdRomX')); 
//=> '//./CdRomX'

Consecutive slashes

Condenses multiple consecutive forward slashes (except for leading slashes in win32 namespaces) to a single slash.

console.log(normalize('.//foo//bar///////baz/')); 
//=> './foo/bar/baz'

Trailing slashes

By default trailing slashes are removed. Pass false as the last argument to disable this behavior and keep trailing slashes:

console.log(normalize('foo\\bar\\baz\\', false)); //=> 'foo/bar/baz/'
console.log(normalize('./foo/bar/baz/', false)); //=> './foo/bar/baz/'
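The behavior above can be approximated in a few lines. This is an illustrative sketch of the documented contract, not the module's actual source:

```javascript
// Illustrative sketch of normalize-path's documented behavior
// (not the module's actual implementation).
function normalizeSketch(path, stripTrailing = true) {
  if (path === '\\' || path === '/') return '/';
  // convert backslashes and condense repeated slashes
  let result = String(path).replace(/[\\/]+/g, '/');
  // restore the extra leading slash for win32 namespaces (\\?\ and \\.\)
  if (/^[\\/]{2}[?.]/.test(path)) result = '/' + result;
  if (stripTrailing !== false && result.length > 1) {
    result = result.replace(/\/+$/, '');
  }
  return result;
}

console.log(normalizeSketch('\\foo\\bar\\baz\\')); //=> '/foo/bar/baz'
console.log(normalizeSketch('.//foo//bar///////baz/', false)); //=> './foo/bar/baz/'
```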

Release history

v3.0

No breaking changes in this release.

About

Contributing

Pull requests and stars are always welcome. For bugs and feature requests, please create an issue.

Running Tests

Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command:

Building docs

(This project’s readme.md is generated by verb, please don’t edit the readme directly. Any changes to the readme must be made in the .verb.md readme template.)

To generate the readme, run the following command:

Other useful path-related libraries:

Commits Contributor
35 jonschlinkert
1 phated

Author

Jon Schlinkert


This file was generated by verb-generate-readme, v0.6.0, on April 19, 2018.



word-wrap NPM version NPM monthly downloads NPM total downloads Linux Build Status

Wrap words to a specified length.

Install

Install with npm:

$ npm install --save word-wrap

Usage

var wrap = require('word-wrap');

wrap('Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat.');

Results in:

  Lorem ipsum dolor sit amet, consectetur adipiscing
  elit, sed do eiusmod tempor incididunt ut labore
  et dolore magna aliqua. Ut enim ad minim veniam,
  quis nostrud exercitation ullamco laboris nisi ut
  aliquip ex ea commodo consequat.
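The default behavior (greedily fill each line to the width, then indent every line) can be sketched as follows. This is illustrative, not word-wrap's source, and it ignores the cut, escape, and trim options documented below:

```javascript
// Greedy word-wrapping sketch (illustrative; not word-wrap's implementation).
// Ignores the cut, escape, and trim options.
function wrapSketch(str, { width = 50, indent = '  ', newline = '\n' } = {}) {
  const words = str.split(/\s+/).filter(Boolean);
  const lines = [];
  let line = '';
  for (const word of words) {
    if (line && (line + ' ' + word).length > width) {
      lines.push(line);   // current line is full; start a new one
      line = word;
    } else {
      line = line ? line + ' ' + word : word;
    }
  }
  if (line) lines.push(line);
  return lines.map((l) => indent + l).join(newline);
}
```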

Options

image
image

options.width

Type: Number

Default: 50

The width of the text before wrapping to a new line.

Example:

wrap(str, {width: 60});

options.indent

Type: String

Default: `  ` (two spaces)

The string to use at the beginning of each line.

Example:

wrap(str, {indent: '      '});

options.newline

Type: String

Default: \n

The string to use at the end of each line.

Example:

wrap(str, {newline: '\n\n'});

options.escape

Type: function

Default: function(str){return str;}

An escape function to run on each line after splitting them.

Example:

var xmlescape = require('xml-escape');
wrap(str, {
  escape: function(string){
    return xmlescape(string);
  }
});

options.trim

Type: Boolean

Default: false

Trim trailing whitespace from the returned string. This option is included since .trim() would also strip the leading indentation from the first line.

Example:

wrap(str, {trim: true});

options.cut

Type: Boolean

Default: false

Break a word between any two letters when the word is longer than the specified width.

Example:

wrap(str, {cut: true});

About

Contributing

Pull requests and stars are always welcome. For bugs and feature requests, please create an issue.

Commits Contributor
43 jonschlinkert
2 lordvlad
2 hildjj
1 danilosampaio
1 2fd
1 toddself
1 wolfgang42
1 zachhale

Building docs

(This project’s readme.md is generated by verb, please don’t edit the readme directly. Any changes to the readme must be made in the .verb.md readme template.)

To generate the readme, run the following command:

$ npm install -g verbose/verb#dev verb-generate-readme && verb

Running tests

Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command:

$ npm install && npm test

Author

Jon Schlinkert


This file was generated by verb-generate-readme, v0.6.0, on June 02, 2017.



regexpp

npm version Downloads/month Build Status codecov Dependency Status

A regular expression parser for ECMAScript.

💿 Installation

$ npm install regexpp

📖 Usage

import {
    AST,
    RegExpParser,
    RegExpValidator,
    RegExpVisitor,
    parseRegExpLiteral,
    validateRegExpLiteral,
    visitRegExpAST
} from "regexpp"

parseRegExpLiteral(source, options?)

Parse a given regular expression literal then make AST object.

This is equivalent to new RegExpParser(options).parseLiteral(source).

validateRegExpLiteral(source, options?)

Validate a given regular expression literal.

This is equivalent to new RegExpValidator(options).validateLiteral(source).

visitRegExpAST(ast, handlers)

Visit each node of a given AST.

This is equivalent to new RegExpVisitor(handlers).visit(ast).

RegExpParser

new RegExpParser(options?)

parser.parseLiteral(source, start?, end?)

Parse a regular expression literal.

parser.parsePattern(source, start?, end?, uFlag?)

Parse a regular expression pattern.

parser.parseFlags(source, start?, end?)

Parse regular expression flags.

RegExpValidator

new RegExpValidator(options)

validator.validateLiteral(source, start, end)

Validate a regular expression literal.

validator.validatePattern(source, start, end, uFlag)

Validate a regular expression pattern.

validator.validateFlags(source, start, end)

Validate regular expression flags.

RegExpVisitor

new RegExpVisitor(handlers)

visitor.visit(ast)

Visit each node of the given AST.

📰 Changelog

🍻 Contributing

Contributions welcome!

Please use GitHub’s Issues/PRs.

Development Tools



http-errors

NPM Version NPM Downloads Node.js Version Build Status Test Coverage

Create HTTP errors for Express, Koa, Connect, etc. with ease.

Install

This is a Node.js module available through the npm registry. Installation is done using the npm install command:

$ npm install http-errors

Example

var createError = require('http-errors')
var express = require('express')
var app = express()

app.use(function (req, res, next) {
  if (!req.user) return next(createError(401, 'Please login to view this page.'))
  next()
})

API

Error Properties

createError([status], message, properties)

Create a new error object with the given message msg. The error object inherits from createError.HttpError.

var err = createError(404, 'This video does not exist!')

createError([status], error, properties)

Extend the given error object with createError.HttpError properties. This will not alter the inheritance of the given error object, and the modified error object is the return value.

fs.readFile('foo.txt', function (err, buf) {
  if (err) {
    if (err.code === 'ENOENT') {
      var httpError = createError(404, err, { expose: false })
    } else {
      var httpError = createError(500, err)
    }
  }
})

new createError[code || name]([msg])

Create a new error object with the given message msg. The error object inherits from createError.HttpError.

var err = new createError.NotFound()

List of all constructors

Status Code Constructor Name
400 BadRequest
401 Unauthorized
402 PaymentRequired
403 Forbidden
404 NotFound
405 MethodNotAllowed
406 NotAcceptable
407 ProxyAuthenticationRequired
408 RequestTimeout
409 Conflict
410 Gone
411 LengthRequired
412 PreconditionFailed
413 PayloadTooLarge
414 URITooLong
415 UnsupportedMediaType
416 RangeNotSatisfiable
417 ExpectationFailed
418 ImATeapot
421 MisdirectedRequest
422 UnprocessableEntity
423 Locked
424 FailedDependency
425 UnorderedCollection
426 UpgradeRequired
428 PreconditionRequired
429 TooManyRequests
431 RequestHeaderFieldsTooLarge
451 UnavailableForLegalReasons
500 InternalServerError
501 NotImplemented
502 BadGateway
503 ServiceUnavailable
504 GatewayTimeout
505 HTTPVersionNotSupported
506 VariantAlsoNegotiates
507 InsufficientStorage
508 LoopDetected
509 BandwidthLimitExceeded
510 NotExtended
511 NetworkAuthenticationRequired
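Conceptually, each constructor above is an Error subclass that carries an HTTP status. A minimal hand-rolled sketch (illustrative only; http-errors' real classes carry additional behavior, such as the expose-flag logic):

```javascript
// Minimal sketch of what a constructor like createError.NotFound provides.
// Illustrative only; not http-errors' implementation.
class NotFound extends Error {
  constructor(message = 'Not Found') {
    super(message);
    this.name = 'NotFoundError';
    this.status = 404;      // read by Express-style error handlers
    this.statusCode = 404;  // alias kept for compatibility
    this.expose = true;     // 4xx messages are safe to send to clients
  }
}

const err = new NotFound();
console.log(err.status, err.message); // 404 Not Found
```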


fastq

ci npm version Dependency Status

Fast, in memory work queue.

Benchmarks (1 million tasks):

Obtained on node 12.16.1, on a dedicated server.

If you need zero-overhead series function call, check out fastseries. For zero-overhead parallel function call, check out fastparallel.

js-standard-style

Install

npm i fastq --save

Usage

'use strict'

var queue = require('fastq')(worker, 1)

queue.push(42, function (err, result) {
  if (err) { throw err }
  console.log('the result is', result)
})

function worker (arg, cb) {
  cb(null, 42 * 2)
}

Setting this

'use strict'

var that = { hello: 'world' }
var queue = require('fastq')(that, worker, 1)

queue.push(42, function (err, result) {
  if (err) { throw err }
  console.log(this)
  console.log('the result is', result)
})

function worker (arg, cb) {
  console.log(this)
  cb(null, 42 * 2)
}

API

fastqueue([that], worker, concurrency)

Creates a new queue.

Arguments:

  * that, optional context of the worker function.
  * worker, worker function; it will be called with that as this, if that is specified.
  * concurrency, number of concurrent tasks that can be executed in parallel.

queue.push(task, done)

Add a task at the end of the queue. done(err, result) will be called when the task has been processed.

queue.unshift(task, done)

Add a task at the beginning of the queue. done(err, result) will be called when the task has been processed.

queue.pause()

Pause the processing of tasks. Currently running tasks are not stopped.

queue.resume()

Resume the processing of tasks.

queue.idle()

Returns false if there are tasks being processed or waiting to be processed, true otherwise.

queue.length()

Returns the number of tasks waiting to be processed (in the queue).

queue.getQueue()

Returns all the tasks waiting to be processed (in the queue). Returns an empty array when there are no tasks.

queue.kill()

Removes all tasks waiting to be processed, and resets drain to an empty function.

queue.killAndDrain()

Same as kill, but the drain function will be called before the reset to empty.

queue.error(handler)

Set a global error handler. handler(err, task) will be called when any of the tasks return an error.

queue.concurrency

Property that returns the number of concurrent tasks that can be executed in parallel. It can be altered at runtime.

queue.drain

Function that will be called when the last item from the queue has been processed by a worker. It can be altered at runtime.

queue.empty

Function that will be called when the last item from the queue has been assigned to a worker. It can be altered at runtime.

queue.saturated

Function that will be called when the queue hits the concurrency limit. It can be altered at runtime.
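To make the push/saturated/drain contract concrete, here is a toy concurrency-limited queue implementing the same hooks. This is NOT fastq's implementation, only an illustration of the semantics above:

```javascript
// Toy queue illustrating the push/saturated/drain contract described above.
// NOT fastq's implementation; just a sketch of the semantics.
function makeQueue(worker, concurrency) {
  const tasks = [];
  let running = 0;
  const q = {
    drain() {},      // called after the last task finishes
    saturated() {},  // called when the concurrency limit is reached
    push(task, done) {
      tasks.push({ task, done });
      run();
    },
    length: () => tasks.length,
    idle: () => running === 0 && tasks.length === 0,
  };
  function run() {
    while (running < concurrency && tasks.length > 0) {
      const { task, done } = tasks.shift();
      running++;
      if (running === concurrency) q.saturated();
      worker(task, (err, result) => {
        running--;
        done(err, result);
        if (q.idle()) q.drain();
        else run();
      });
    }
  }
  return q;
}

const q = makeQueue((arg, cb) => cb(null, arg * 2), 1);
q.drain = () => console.log('all done');
q.push(21, (err, result) => console.log('the result is', result));
```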

ISC



Yargs

Yargs be a node.js library fer hearties tryin’ ter parse optstrings


ci NPM version js-standard-style Coverage Conventional Commits Slack

Description

Yargs helps you build interactive command line tools, by parsing arguments and generating an elegant user interface.

It gives you:

mocha [spec..]

Run tests with Mocha

Commands
  mocha inspect [spec..]  Run tests with Mocha                         [default]
  mocha init <path>       create a client-side Mocha setup at <path>

Rules & Behavior
  --allow-uncaught           Allow uncaught errors to propagate        [boolean]
  --async-only, -A           Require all tests to use a callback (async) or
                             return a Promise                          [boolean]

Installation

Stable version:

npm i yargs

Bleeding edge version with the most recent features:

npm i yargs@next

Usage

Simple Example

#!/usr/bin/env node
const yargs = require('yargs/yargs')
const { hideBin } = require('yargs/helpers')
const argv = yargs(hideBin(process.argv)).argv

if (argv.ships > 3 && argv.distance < 53.5) {
  console.log('Plunder more riffiwobbles!')
} else {
  console.log('Retreat from the xupptumblers!')
}
$ ./plunder.js --ships=4 --distance=22
Plunder more riffiwobbles!

$ ./plunder.js --ships 12 --distance 98.7
Retreat from the xupptumblers!
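hideBin is a small helper: for an ordinary Node invocation it is essentially process.argv.slice(2), dropping the interpreter path and the script path (Electron needs slightly different handling). A sketch, not yargs' exact implementation:

```javascript
// Approximation of yargs' hideBin helper for ordinary Node invocations:
// drop the interpreter path and script path, keep the user arguments.
// Sketch only; the real helper also accounts for Electron.
function hideBinSketch(argv) {
  return argv.slice(2);
}

console.log(hideBinSketch(['/usr/bin/node', '/app/plunder.js', '--ships=4', '--distance=22']));
// → [ '--ships=4', '--distance=22' ]
```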

Complex Example

#!/usr/bin/env node
const yargs = require('yargs/yargs')
const { hideBin } = require('yargs/helpers')

yargs(hideBin(process.argv))
  .command('serve [port]', 'start the server', (yargs) => {
    yargs
      .positional('port', {
        describe: 'port to bind on',
        default: 5000
      })
  }, (argv) => {
    if (argv.verbose) console.info(`start server on :${argv.port}`)
    serve(argv.port)
  })
  .option('verbose', {
    alias: 'v',
    type: 'boolean',
    description: 'Run with verbose logging'
  })
  .argv

Run the example above with --help to see the help for the application.

TypeScript

yargs has type definitions at @types/yargs.

npm i @types/yargs --save-dev

See usage examples in docs.

Deno

As of v16, yargs supports Deno:

import yargs from 'https://deno.land/x/yargs/deno.ts'
import { Arguments } from 'https://deno.land/x/yargs/deno-types.ts'

yargs(Deno.args)
  .command('download <files...>', 'download a list of files', (yargs: any) => {
    return yargs.positional('files', {
      describe: 'a list of files to do something with'
    })
  }, (argv: Arguments) => {
    console.info(argv)
  })
  .strictCommands()
  .demandCommand(1)
  .argv

ESM

As of v16, yargs supports ESM imports:

import yargs from 'yargs'
import { hideBin } from 'yargs/helpers'

yargs(hideBin(process.argv))
  .command('curl <url>', 'fetch the contents of the URL', () => {}, (argv) => {
    console.info(argv)
  })
  .demandCommand(1)
  .argv

Usage in Browser

See examples of using yargs in the browser in docs.

Community

Having problems? Want to contribute? Join our community Slack.

Documentation

Table of Contents

Libraries in this ecosystem make a best effort to track Node.js’ release schedule. Here’s a post on why we think this is important.



interpret

NPM version Downloads Travis Build Status AppVeyor Build Status Coveralls Status Gitter chat

A dictionary of file extensions and associated module loaders.

What is it

This is used by Liftoff to automatically require dependencies for configuration files, and by rechoir for registering module loaders.

API

extensions

Map file types to modules which provide a require.extensions loader.

{
  '.babel.js': [
    {
      module: '@babel/register',
      register: function(hook) {
        // register on .js extension due to https://github.com/joyent/node/blob/v0.12.0/lib/module.js#L353
        // which only captures the final extension (.babel.js -> .js)
        hook({ extensions: '.js' });
      },
    },
    {
      module: 'babel-register',
      register: function(hook) {
        hook({ extensions: '.js' });
      },
    },
    {
      module: 'babel-core/register',
      register: function(hook) {
        hook({ extensions: '.js' });
      },
    },
    {
      module: 'babel/register',
      register: function(hook) {
        hook({ extensions: '.js' });
      },
    },
  ],
  '.babel.ts': [
    {
      module: '@babel/register',
      register: function(hook) {
        hook({ extensions: '.ts' });
      },
    },
  ],
  '.buble.js': 'buble/register',
  '.cirru': 'cirru-script/lib/register',
  '.cjsx': 'node-cjsx/register',
  '.co': 'coco',
  '.coffee': ['coffeescript/register', 'coffee-script/register', 'coffeescript', 'coffee-script'],
  '.coffee.md': ['coffeescript/register', 'coffee-script/register', 'coffeescript', 'coffee-script'],
  '.csv': 'require-csv',
  '.eg': 'earlgrey/register',
  '.esm.js': {
    module: 'esm',
    register: function(hook) {
      // register on .js extension due to https://github.com/joyent/node/blob/v0.12.0/lib/module.js#L353
      // which only captures the final extension (.esm.js -> .js)
      var esmLoader = hook(module);
      require.extensions['.js'] = esmLoader('module')._extensions['.js'];
    },
  },
  '.iced': ['iced-coffee-script/register', 'iced-coffee-script'],
  '.iced.md': 'iced-coffee-script/register',
  '.ini': 'require-ini',
  '.js': null,
  '.json': null,
  '.json5': 'json5/lib/require',
  '.jsx': [
    {
      module: '@babel/register',
      register: function(hook) {
        hook({ extensions: '.jsx' });
      },
    },
    {
      module: 'babel-register',
      register: function(hook) {
        hook({ extensions: '.jsx' });
      },
    },
    {
      module: 'babel-core/register',
      register: function(hook) {
        hook({ extensions: '.jsx' });
      },
    },
    {
      module: 'babel/register',
      register: function(hook) {
        hook({ extensions: '.jsx' });
      },
    },
    {
      module: 'node-jsx',
      register: function(hook) {
        hook.install({ extension: '.jsx', harmony: true });
      },
    },
  ],
  '.litcoffee': ['coffeescript/register', 'coffee-script/register', 'coffeescript', 'coffee-script'],
  '.liticed': 'iced-coffee-script/register',
  '.ls': ['livescript', 'LiveScript'],
  '.mjs': '/absolute/path/to/interpret/mjs-stub.js',
  '.node': null,
  '.toml': {
    module: 'toml-require',
    register: function(hook) {
      hook.install();
    },
  },
  '.ts': [
    'ts-node/register',
    'typescript-node/register',
    'typescript-register',
    'typescript-require',
    'sucrase/register/ts',
    {
      module: '@babel/register',
      register: function(hook) {
        hook({ extensions: '.ts' });
      },
    },
  ],
  '.tsx': [
    'ts-node/register',
    'typescript-node/register',
    'sucrase/register',
    {
      module: '@babel/register',
      register: function(hook) {
        hook({ extensions: '.tsx' });
      },
    },
  ],
  '.wisp': 'wisp/engine/node',
  '.xml': 'require-xml',
  '.yaml': 'require-yaml',
  '.yml': 'require-yaml',
}

jsVariants

Same as above, but only includes the extensions which are JavaScript variants.

How to use it

Consumers should use the exported extensions or jsVariants object to determine which module should be loaded for a given extension. If a matching extension is found, consumers should do the following:

  1. If the value is null, do nothing.

  2. If the value is a string, try to require it.

  3. If the value is an object, try to require the module property. If successful, the register property (a function) should be called with the module passed as the first argument.

  4. If the value is an array, iterate over it, attempting step #2 or #3 until one of the attempts does not throw.
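Those four steps can be sketched as a small recursive helper. The function name and shape here are illustrative, not part of interpret's API:

```javascript
// Sketch of the four-step lookup described above. Illustrative only;
// the function name and shape are not part of interpret's API.
function registerLoader(entry) {
  if (entry === null) return;                    // 1. nothing to load
  if (typeof entry === 'string') {               // 2. plain module id
    require(entry);
    return;
  }
  if (Array.isArray(entry)) {                    // 4. try each until one works
    for (const candidate of entry) {
      try {
        return registerLoader(candidate);
      } catch (err) {
        // this candidate is not installed; try the next one
      }
    }
    throw new Error('no module loader could be registered');
  }
  // 3. { module, register } pair
  const mod = require(entry.module);
  entry.register(mod);
}
```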

Google Cloud Platform logo



Google Cloud Common Promisify: Node.js Client

release level npm version codecov

A simple utility for promisifying functions and classes.

A comprehensive list of changes in each version may be found in the CHANGELOG.

Read more about the client libraries for Cloud APIs, including the older Google APIs Client Libraries, in Client Libraries Explained.

Table of contents:

Quickstart

Installing the client library

npm install @google-cloud/promisify

Using the client library

const {promisify} = require('@google-cloud/promisify');

/**
 * This is a very basic example function that accepts a callback.
 */
function someCallbackFunction(name, callback) {
  if (!name) {
    callback(new Error('Name is required!'));
  } else {
    callback(null, `Well hello there, ${name}!`);
  }
}

// let's promisify it!
const somePromiseFunction = promisify(someCallbackFunction);

async function quickstart() {
  // now we can just `await` the function to use it like a promisified method
  const [result] = await somePromiseFunction('nodestronaut');
  console.log(result);
}
quickstart();

It’s unlikely you will need to install this package directly, as it will be installed as a dependency when you install other @google-cloud packages.

Samples

Samples are in the samples/ directory. The samples’ README.md has instructions for running the samples.

Sample Source Code Try it
Quickstart source code Open in Cloud Shell

The Google Cloud Common Promisify Node.js Client API Reference documentation also contains samples.

Our client libraries follow the Node.js release schedule. Libraries are compatible with all current active and maintenance versions of Node.js.

Client libraries targeting some end-of-life versions of Node.js are available, and can be installed via npm dist-tags. The dist-tags follow the naming convention legacy-(version).

Legacy Node.js versions are supported as a best effort:

Legacy tags available

Versioning

This library follows Semantic Versioning.

This library is considered to be General Availability (GA). This means it is stable; the code surface will not change in backwards-incompatible ways unless absolutely necessary (e.g. because of critical security issues) or with an extensive deprecation period. Issues and requests against GA libraries are addressed with the highest priority.

More Information: Google Cloud Platform Launch Stages

Contributing

Contributions welcome! See the Contributing Guide.

Please note that this README.md, the samples/README.md, and a variety of configuration files in this repository (including .nycrc and tsconfig.json) are generated from a central template. To edit one of these files, make an edit to its template in this directory.

Apache Version 2.0

See LICENSE



is-descriptor NPM version NPM monthly downloads NPM total downloads Linux Build Status

Returns true if a value has the characteristics of a valid JavaScript descriptor. Works for data descriptors and accessor descriptors.

Install

Install with npm:

$ npm install --save is-descriptor

Usage

var isDescriptor = require('is-descriptor');

isDescriptor({value: 'foo'})
//=> true
isDescriptor({get: function(){}, set: function(){}})
//=> true
isDescriptor({get: 'foo', set: function(){}})
//=> false

You may also check for a descriptor by passing an object as the first argument and property name (string) as the second argument.

var obj = {};
obj.foo = 'abc';

Object.defineProperty(obj, 'bar', {
  value: 'xyz'
});

isDescriptor(obj, 'foo');
//=> true
isDescriptor(obj, 'bar');
//=> true

Examples

value type

false when not an object

isDescriptor('a');
//=> false
isDescriptor(null);
//=> false
isDescriptor([]);
//=> false

data descriptor

true when the object has valid properties with valid values.

var noop = function() {};

isDescriptor({value: 'foo'});
//=> true
isDescriptor({value: noop});
//=> true

false when the object has invalid properties

isDescriptor({value: 'foo', bar: 'baz'});
//=> false
isDescriptor({value: 'foo', get: noop});
//=> false
isDescriptor({get: noop, value: noop});
//=> false

false when a value is not the correct type

isDescriptor({value: 'foo', enumerable: 'foo'});
//=> false
isDescriptor({value: 'foo', configurable: 'foo'});
//=> false
isDescriptor({value: 'foo', writable: 'foo'});
//=> false

accessor descriptor

true when the object has valid properties with valid values.

isDescriptor({get: noop, set: noop});
//=> true
isDescriptor({get: noop});
//=> true
isDescriptor({set: noop});
//=> true

false when the object has invalid properties

isDescriptor({get: noop, set: noop, bar: 'baz'});
//=> false
isDescriptor({get: noop, writable: true});
//=> false
isDescriptor({get: noop, value: true});
//=> false

false when an accessor is not a function

isDescriptor({get: noop, set: 'baz'});
//=> false
isDescriptor({get: 'foo', set: noop});
//=> false
isDescriptor({get: 'foo', bar: 'baz'});
//=> false
isDescriptor({get: 'foo', set: 'baz'});
//=> false

false when a value is not the correct type

isDescriptor({get: noop, set: noop, enumerable: 'foo'});
//=> false
isDescriptor({set: noop, configurable: 'foo'});
//=> false
isDescriptor({get: noop, configurable: 'foo'});
//=> false
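The rules demonstrated above can be sketched as a plain validator for the data-descriptor case (an illustrative sketch only, not is-descriptor's actual implementation):

```javascript
// Sketch: validate a data descriptor's shape (illustrative only, not the
// module's real code). A data descriptor may only carry these keys.
const dataKeys = {
  value: null,               // any type allowed
  writable: 'boolean',
  enumerable: 'boolean',
  configurable: 'boolean'
};

function looksLikeDataDescriptor(obj) {
  if (typeof obj !== 'object' || obj === null || Array.isArray(obj)) return false;
  if (!('value' in obj)) return false;
  return Object.keys(obj).every(key => {
    if (!(key in dataKeys)) return false;            // unknown key => invalid
    const expected = dataKeys[key];
    return expected === null || typeof obj[key] === expected;
  });
}

console.log(looksLikeDataDescriptor({ value: 'foo' }));                    // true
console.log(looksLikeDataDescriptor({ value: 'foo', bar: 'baz' }));        // false
console.log(looksLikeDataDescriptor({ value: 'foo', writable: 'nope' }));  // false
```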

About

Contributing

Pull requests and stars are always welcome. For bugs and feature requests, please create an issue.

Commits Contributor
24 jonschlinkert
1 doowb
1 wtgtybhertgeghgtwtg

Building docs

(This project’s readme.md is generated by verb, please don’t edit the readme directly. Any changes to the readme must be made in the .verb.md readme template.)

To generate the readme, run the following command:

$ npm install -g verbose/verb#dev verb-generate-readme && verb

Running tests

Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command:

$ npm install && npm test

Author

Jon Schlinkert


This file was generated by verb-generate-readme, v0.6.0, on July 22, 2017.



rc

The non-configurable configuration loader for lazy people.

Usage

The only option is to pass rc the name of your app, and your default configuration.

var conf = require('rc')(appname, {
  //defaults go here.
  port: 2468,

  //defaults which are objects will be merged, not replaced
  views: {
    engine: 'jade'
  }
});

rc will return your configuration options merged with the defaults you specify. If you pass in a predefined defaults object, it will be mutated:

var conf = {};
require('rc')(appname, conf);

If rc finds any config files for your app, the returned config object will have a configs array containing their paths:

var appCfg = require('rc')(appname, conf);
appCfg.configs[0] // /etc/appnamerc
appCfg.configs[1] // /home/dominictarr/.config/appname
appCfg.config // same as appCfg.configs[appCfg.configs.length - 1]

Standards

Given your application name (appname), rc will look in all the obvious places for configuration.

All configuration sources that were found will be flattened into one object, so that sources earlier in this list override later ones.

Configuration File Formats

Configuration files (e.g. .appnamerc) may be in either json or ini format. No file extension (.json or .ini) should be used. The example configurations below are equivalent:

Formatted as ini

; You can include comments in `ini` format if you want.

dependsOn=0.10.0


; `rc` has built-in support for ini sections, see?

[commands]
  www     = ./commands/www
  console = ./commands/repl


; You can even do nested sections

[generators.options]
  engine  = ejs

[generators.modules]
  new     = generate-new
  engine  = generate-backend

Formatted as json

{
  // You can even comment your JSON, if you want
  "dependsOn": "0.10.0",
  "commands": {
    "www": "./commands/www",
    "console": "./commands/repl"
  },
  "generators": {
    "options": {
      "engine": "ejs"
    },
    "modules": {
      "new": "generate-new",
      "backend": "generate-backend"
    }
  }
}

Comments are stripped from JSON config via strip-json-comments.

Since ini files and environment variables have no standard for types, your application needs to be prepared for strings.

To ensure that string representations of booleans and numbers are always converted into their proper types (especially useful if you intend to do strict === comparisons), consider using a module such as parse-strings-in-object to wrap the config object returned from rc.
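A minimal coercion helper might look like the following (a hedged sketch; the parse-strings-in-object package handles more cases):

```javascript
// Sketch: coerce string representations of booleans/numbers in a config
// object (illustrative only; parse-strings-in-object is more thorough).
function coerceStrings(obj) {
  const out = {};
  for (const [key, value] of Object.entries(obj)) {
    if (value === 'true') out[key] = true;
    else if (value === 'false') out[key] = false;
    else if (typeof value === 'string' && value.trim() !== '' && !isNaN(Number(value))) {
      out[key] = Number(value);
    } else if (value && typeof value === 'object' && !Array.isArray(value)) {
      out[key] = coerceStrings(value);     // recurse into nested sections
    } else {
      out[key] = value;
    }
  }
  return out;
}

const typed = coerceStrings({ port: '3001', verbose: 'true', name: 'myapp' });
console.log(typed); // { port: 3001, verbose: true, name: 'myapp' }
```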

Simple example demonstrating precedence

Assume you have an application like this (notice the hard-coded defaults passed to rc):

const conf = require('rc')('myapp', {
    port: 12345,
    mode: 'test'
});

console.log(JSON.stringify(conf, null, 2));

You also have a file config.json, with these contents:

{
  "port": 9000,
  "foo": "from config json",
  "something": "else"
}

And a file .myapprc in the same folder, with these contents:

{
  "port": "3001",
  "foo": "bar"
}

Here is the expected output from various commands:

node .

{
  "port": "3001",
  "mode": "test",
  "foo": "bar",
  "_": [],
  "configs": [
    "/Users/stephen/repos/conftest/.myapprc"
  ],
  "config": "/Users/stephen/repos/conftest/.myapprc"
}

The default mode from the hard-coded object is retained, but port is overridden by the .myapprc file (automatically found based on the appname match), and foo is added.

node . --foo baz

{
  "port": "3001",
  "mode": "test",
  "foo": "baz",
  "_": [],
  "configs": [
    "/Users/stephen/repos/conftest/.myapprc"
  ],
  "config": "/Users/stephen/repos/conftest/.myapprc"
}

Same result as above, but foo is overridden because command-line arguments take precedence over the .myapprc file.

node . --foo barbar --config config.json

{
  "port": 9000,
  "mode": "test",
  "foo": "barbar",
  "something": "else",
  "_": [],
  "config": "config.json",
  "configs": [
    "/Users/stephen/repos/conftest/.myapprc",
    "config.json"
  ]
}

Now the port comes from the specified config.json file (overriding the value from .myapprc), and the foo value is overridden by the command line despite also being specified in the config.json file.

Advanced Usage

Pass in your own argv

You may pass in your own argv as the third argument to rc. This is in case you want to use your own command-line opts parser.

require('rc')(appname, defaults, customArgvParser);

Pass in your own parser

If you have a special need to use a non-standard parser, you can do so by passing in the parser as the 4th argument. (leave the 3rd as null to get the default args parser)

require('rc')(appname, defaults, null, parser);

This may also be used to enforce a stricter format, such as valid JSON only.

Note on Performance

rc runs fs.statSync, so make sure you don’t use it in a hot code path (e.g. a request handler).



lru cache

A cache object that deletes the least-recently-used items.

Build Status Coverage Status

Installation:

npm install lru-cache --save

Usage:

var LRU = require("lru-cache")
  , options = { max: 500
              , length: function (n, key) { return n * 2 + key.length }
              , dispose: function (key, n) { n.close() }
              , maxAge: 1000 * 60 * 60 }
  , cache = new LRU(options)
  , otherCache = new LRU(50) // sets just the max size

cache.set("key", "value")
cache.get("key") // "value"

// non-string keys ARE fully supported
// but note that it must be THE SAME object, not
// just a JSON-equivalent object.
var someObject = { a: 1 }
cache.set(someObject, 'a value')
// Object keys are not toString()-ed
cache.set('[object Object]', 'a different value')
assert.equal(cache.get(someObject), 'a value')
// A similar object with same keys/values won't work,
// because it's a different object identity
assert.equal(cache.get({ a: 1 }), undefined)

cache.reset()    // empty the cache

If you put more stuff in it, then items will fall out.

If you try to put an oversized thing in it, then it’ll fall out right away.
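The eviction behavior can be illustrated with a minimal Map-based sketch of the least-recently-used policy (illustration only; the real module also handles length calculation, maxAge, and dispose callbacks):

```javascript
// Minimal LRU sketch built on Map's insertion order (illustration only).
class TinyLRU {
  constructor(max) { this.max = max; this.map = new Map(); }
  get(key) {
    if (!this.map.has(key)) return undefined;
    const value = this.map.get(key);
    this.map.delete(key);        // re-insert to mark as most recently used
    this.map.set(key, value);
    return value;
  }
  set(key, value) {
    if (this.map.has(key)) this.map.delete(key);
    this.map.set(key, value);
    if (this.map.size > this.max) {
      // Map iterates in insertion order, so the first key is the LRU one
      this.map.delete(this.map.keys().next().value);
    }
  }
}

const tiny = new TinyLRU(2);
tiny.set('a', 1);
tiny.set('b', 2);
tiny.get('a');        // touch 'a' so 'b' becomes least recently used
tiny.set('c', 3);     // exceeds max, so 'b' falls out
console.log(tiny.get('b')); // undefined
console.log(tiny.get('a')); // 1
```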

Options

API



Punycode.js Build status Code coverage status Dependency status

A robust Punycode converter that fully complies with RFC 3492 and RFC 5891, and works on nearly all JavaScript platforms.

This JavaScript library is the result of comparing, optimizing and documenting different open-source implementations of the Punycode algorithm:

This project is bundled with Node.js v0.6.2+.

Installation

Via npm (only required for Node.js releases older than v0.6.2):

npm install punycode

Via Bower:

bower install punycode

Via Component:

component install bestiejs/punycode.js

In a browser:

<script src="punycode.js"></script>

In Narwhal, Node.js, and RingoJS:

var punycode = require('punycode');

In Rhino:

load('punycode.js');

Using an AMD loader like RequireJS:

require(
  {
    'paths': {
      'punycode': 'path/to/punycode'
    }
  },
  ['punycode'],
  function(punycode) {
    console.log(punycode);
  }
);

API

punycode.decode(string)

Converts a Punycode string of ASCII symbols to a string of Unicode symbols.

// decode domain name parts
punycode.decode('maana-pta'); // 'mañana'
punycode.decode('--dqo34k'); // '☃-⌘'

punycode.encode(string)

Converts a string of Unicode symbols to a Punycode string of ASCII symbols.

// encode domain name parts
punycode.encode('mañana'); // 'maana-pta'
punycode.encode('☃-⌘'); // '--dqo34k'

punycode.toUnicode(input)

Converts a Punycode string representing a domain name or an email address to Unicode. Only the Punycoded parts of the input will be converted, i.e. it doesn’t matter if you call it on a string that has already been converted to Unicode.

// decode domain names
punycode.toUnicode('xn--maana-pta.com');
// → 'mañana.com'
punycode.toUnicode('xn----dqo34k.com');
// → '☃-⌘.com'

// decode email addresses
punycode.toUnicode('джумла@xn--p-8sbkgc5ag7bhce.xn--ba-lmcq');
// → 'джумла@джpумлатест.bрфa'

punycode.toASCII(input)

Converts a Unicode string representing a domain name or an email address to Punycode. Only the non-ASCII parts of the input will be converted, i.e. it doesn’t matter if you call it with a domain that’s already in ASCII.

// encode domain names
punycode.toASCII('mañana.com');
// → 'xn--maana-pta.com'
punycode.toASCII('☃-⌘.com');
// → 'xn----dqo34k.com'

// encode email addresses
punycode.toASCII('джумла@джpумлатест.bрфa');
// → 'джумла@xn--p-8sbkgc5ag7bhce.xn--ba-lmcq'

punycode.ucs2

punycode.ucs2.decode(string)

Creates an array containing the numeric code point values of each Unicode symbol in the string. While JavaScript uses UCS-2 internally, this function will convert a pair of surrogate halves (each of which UCS-2 exposes as separate characters) into a single code point, matching UTF-16.

punycode.ucs2.decode('abc');
// → [0x61, 0x62, 0x63]
// surrogate pair for U+1D306 TETRAGRAM FOR CENTRE:
punycode.ucs2.decode('\uD834\uDF06');
// → [0x1D306]

punycode.ucs2.encode(codePoints)

Creates a string based on an array of numeric code point values.

punycode.ucs2.encode([0x61, 0x62, 0x63]);
// → 'abc'
punycode.ucs2.encode([0x1D306]);
// → '\uD834\uDF06'
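The same surrogate-pair handling is available in the language itself; plain-JS equivalents of `ucs2.decode` and `ucs2.encode` can be sketched like this (assumed equivalents, not the library's code):

```javascript
// The string iterator yields whole code points, so surrogate pairs
// are kept together, matching punycode.ucs2.decode's behavior.
function ucs2Decode(string) {
  return Array.from(string, ch => ch.codePointAt(0));
}

// String.fromCodePoint re-emits surrogate pairs for astral code points.
function ucs2Encode(codePoints) {
  return String.fromCodePoint(...codePoints);
}

console.log(ucs2Decode('abc'));          // [ 97, 98, 99 ]
console.log(ucs2Decode('\uD834\uDF06')); // [ 119558 ]  (0x1D306)
console.log(ucs2Encode([0x1D306]));      // '\uD834\uDF06'
```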

punycode.version

A string representing the current Punycode.js version number.

Unit tests & code coverage

After cloning this repository, run npm install --dev to install the dependencies needed for Punycode.js development and testing. You may want to install Istanbul globally using npm install istanbul -g.

Once that’s done, you can run the unit tests in Node using npm test or node tests/tests.js. To run the tests in Rhino, Ringo, Narwhal, PhantomJS, and web browsers as well, use grunt test.

To generate the code coverage report, use grunt cover.

Feel free to fork if you see possible improvements!

Author

twitter/mathias
Mathias Bynens
twitter/jdalton
John-David Dalton


ESLint Template Visitor

Build Status Coverage Status

Simplify eslint rules by visiting templates

Install

npm install eslint-template-visitor

# or

yarn add eslint-template-visitor

Showcase

+const eslintTemplateVisitor = require('eslint-template-visitor');
+
+const templates = eslintTemplateVisitor();
+
+const objectVariable = templates.variable();
+const argumentsVariable = templates.spreadVariable();
+
+const substrCallTemplate = templates.template`${objectVariable}.substr(${argumentsVariable})`;

 const create = context => {
    const sourceCode = context.getSourceCode();

-   return {
-       CallExpression(node) {
-           if (node.callee.type !== 'MemberExpression'
-               || node.callee.property.type !== 'Identifier'
-               || node.callee.property.name !== 'substr'
-           ) {
-               return;
-           }
-
-           const objectNode = node.callee.object;
+   return templates.visitor({
+       [substrCallTemplate](node) {
+           const objectNode = substrCallTemplate.context.getMatch(objectVariable);
+           const argumentNodes = substrCallTemplate.context.getMatch(argumentsVariable);

            const problem = {
                node,
                message: 'Prefer `String#slice()` over `String#substr()`.',
            };

-           const canFix = node.arguments.length === 0;
+           const canFix = argumentNodes.length === 0;

            if (canFix) {
                problem.fix = fixer => fixer.replaceText(node, sourceCode.getText(objectNode) + '.slice()');
            }

            context.report(problem);
        },
-   };
+   });
 };

See examples for more.

API

eslintTemplateVisitor(options?)

Create a template visitor.

Example:

const eslintTemplateVisitor = require('eslint-template-visitor');

const templates = eslintTemplateVisitor();

options

Type: object

parserOptions

Options for the template parser. Passed down to babel-eslint.

Example:

const templates = eslintTemplateVisitor({
    parserOptions: {
        ecmaVersion: 2018,
    },
});

templates.variable()

Create a variable to be used in a template. Such a variable can match exactly one AST node.

templates.spreadVariable()

Create a spread variable. A spread variable can match an array of AST nodes.

This is useful for matching a number of arguments in a call or a number of statements in a block.

templates.variableDeclarationVariable()

Create a variable declaration variable, which can match any type of variable declaration node.

This is useful for matching any variable declaration, be it const, let or var.

Use it in place of a variable declaration keyword:

const variableDeclarationVariable = templates.variableDeclarationVariable();

const template = templates.template`() => {
    ${variableDeclarationVariable} x = y;
}`;

templates.template tag

Creates a template possibly containing variables.

Example:

const objectVariable = templates.variable();
const argumentsVariable = templates.spreadVariable();

const substrCallTemplate = templates.template`${objectVariable}.substr(${argumentsVariable})`;

const create = () => templates.visitor({
    [substrCallTemplate](node) {
        // `node` here is the matching `.substr` call (i.e. `CallExpression`)
    }
});

templates.visitor({ /* visitors */ })

Used to merge template visitors with common ESLint visitors.

Example:

const create = () => templates.visitor({
    [substrCallTemplate](node) {
        // Template visitor
    },

    FunctionDeclaration(node) {
        // Simple node type visitor
    },

    'IfStatement > BlockStatement'(node) {
        // ESLint selector visitor
    },
});

template.context

A template match context. This property is defined only within a visitor call (in other words, only when working on a matching node).

Example:

const create = () => templates.visitor({
    [substrCallTemplate](node) {
        // `substrCallTemplate.context` can be used here
    },

    FunctionDeclaration(node) {
        // `substrCallTemplate.context` is not defined here, and it does not make sense to use it here,
        // since `substrCallTemplate` did not match an AST node.
    },
});

template.context.getMatch(variable)

Used to get a match for a variable.

Example:

const objectVariable = templates.variable();
const argumentsVariable = templates.spreadVariable();

const substrCallTemplate = templates.template`${objectVariable}.substr(${argumentsVariable})`;

const create = () => templates.visitor({
    [substrCallTemplate](node) {
        const objectNode = substrCallTemplate.context.getMatch(objectVariable);

        // For example, let's check if `objectNode` is an `Identifier`: `objectNode.type === 'Identifier'`

        const argumentNodes = substrCallTemplate.context.getMatch(argumentsVariable);

        // `Array.isArray(argumentNodes) === true`
    },
});

template.narrow(selector, targetMatchIndex = 0)

Narrow the template to a part of the AST matching the selector.

Sometimes you cannot define the template you want at the top level due to JS syntax limitations. For example, you can’t have await or yield at the top level of a script.

Use a wrapper function in the template and then narrow it to the wanted AST node:

const template = templates.template`
    async () => { await 1; }
`.narrow('BlockStatement > :has(AwaitExpression)');

The template above is equivalent to this:

const template = templates.template`await 1`;

Except the latter cannot be defined directly due to espree limitations.



pako

Build Status NPM version

zlib port to javascript, very fast!

Why pako is cool:

This project was done to understand how fast JS can be, and whether it is necessary to develop native C modules for CPU-intensive tasks. Enjoy the result!

Famous projects, using pako:

Benchmarks:

node v0.10.26, 1mb sample:

   deflate-dankogai x 4.73 ops/sec ±0.82% (15 runs sampled)
   deflate-gildas x 4.58 ops/sec ±2.33% (15 runs sampled)
   deflate-imaya x 3.22 ops/sec ±3.95% (12 runs sampled)
 ! deflate-pako x 6.99 ops/sec ±0.51% (21 runs sampled)
   deflate-pako-string x 5.89 ops/sec ±0.77% (18 runs sampled)
   deflate-pako-untyped x 4.39 ops/sec ±1.58% (14 runs sampled)
 * deflate-zlib x 14.71 ops/sec ±4.23% (59 runs sampled)
   inflate-dankogai x 32.16 ops/sec ±0.13% (56 runs sampled)
   inflate-imaya x 30.35 ops/sec ±0.92% (53 runs sampled)
 ! inflate-pako x 69.89 ops/sec ±1.46% (71 runs sampled)
   inflate-pako-string x 19.22 ops/sec ±1.86% (49 runs sampled)
   inflate-pako-untyped x 17.19 ops/sec ±0.85% (32 runs sampled)
 * inflate-zlib x 70.03 ops/sec ±1.64% (81 runs sampled)

node v0.11.12, 1mb sample:

   deflate-dankogai x 5.60 ops/sec ±0.49% (17 runs sampled)
   deflate-gildas x 5.06 ops/sec ±6.00% (16 runs sampled)
   deflate-imaya x 3.52 ops/sec ±3.71% (13 runs sampled)
 ! deflate-pako x 11.52 ops/sec ±0.22% (32 runs sampled)
   deflate-pako-string x 9.53 ops/sec ±1.12% (27 runs sampled)
   deflate-pako-untyped x 5.44 ops/sec ±0.72% (17 runs sampled)
 * deflate-zlib x 14.05 ops/sec ±3.34% (63 runs sampled)
   inflate-dankogai x 42.19 ops/sec ±0.09% (56 runs sampled)
   inflate-imaya x 79.68 ops/sec ±1.07% (68 runs sampled)
 ! inflate-pako x 97.52 ops/sec ±0.83% (80 runs sampled)
   inflate-pako-string x 45.19 ops/sec ±1.69% (57 runs sampled)
   inflate-pako-untyped x 24.35 ops/sec ±2.59% (40 runs sampled)
 * inflate-zlib x 60.32 ops/sec ±1.36% (69 runs sampled)

zlib’s results are partially affected by marshalling overhead (this matters for inflate only). You can change the deflate level to 0 in the benchmark source to investigate the details. For deflate level 6, the results can be considered correct.

Install:

node.js:

npm install pako

browser:

bower install pako

Example & API

Full docs - http://nodeca.github.io/pako/

var pako = require('pako');

// Deflate
//
var input = new Uint8Array();
//... fill input data here
var output = pako.deflate(input);

// Inflate (simple wrapper can throw exception on broken stream)
//
var compressed = new Uint8Array();
//... fill data to uncompress here
try {
  var result = pako.inflate(compressed);
} catch (err) {
  console.log(err);
}

//
// Alternate interface for chunking & without exceptions
//

var inflator = new pako.Inflate();

inflator.push(chunk1, false);
inflator.push(chunk2, false);
...
inflator.push(chunkN, true); // true -> last chunk

if (inflator.err) {
  console.log(inflator.msg);
}

var output = inflator.result;

Sometimes you may wish to work with strings, for example to send big objects as JSON to a server. Pako detects the input data type automatically; you can force the output to be a string with the option { to: 'string' }.

var pako = require('pako');

var test = { my: 'super', puper: [456, 567], awesome: 'pako' };

var binaryString = pako.deflate(JSON.stringify(test), { to: 'string' });

//
// Here you can do base64 encode, make xhr requests and so on.
//

var restored = JSON.parse(pako.inflate(binaryString, { to: 'string' }));

Notes

Pako does not contain some specific zlib functions:

pako for enterprise

Available as part of the Tidelift Subscription

The maintainers of pako and thousands of other packages are working with Tidelift to deliver commercial support and maintenance for the open source dependencies you use to build your applications. Save time, reduce risk, and improve code health, while paying the maintainers of the exact dependencies you use. Learn more.

Authors

Personal thanks to:

Original implementation (in C):



Punycode.js Build status Code coverage status Dependency status

A robust Punycode converter that fully complies to RFC 3492 and RFC 5891, and works on nearly all JavaScript platforms.

This JavaScript library is the result of comparing, optimizing and documenting different open-source implementations of the Punycode algorithm:

This project is bundled with Node.js v0.6.2+ and io.js v1.0.0+.

Installation

Via npm (only required for Node.js releases older than v0.6.2):

npm install punycode

Via Bower:

bower install punycode

Via Component:

component install bestiejs/punycode.js

In a browser:

<script src="punycode.js"></script>

In Node.js, io.js, Narwhal, and RingoJS:

var punycode = require('punycode');

In Rhino:

load('punycode.js');

Using an AMD loader like RequireJS:

require(
  {
    'paths': {
      'punycode': 'path/to/punycode'
    }
  },
  ['punycode'],
  function(punycode) {
    console.log(punycode);
  }
);

API

punycode.decode(string)

Converts a Punycode string of ASCII symbols to a string of Unicode symbols.

// decode domain name parts
punycode.decode('maana-pta'); // 'mañana'
punycode.decode('--dqo34k'); // '☃-⌘'

punycode.encode(string)

Converts a string of Unicode symbols to a Punycode string of ASCII symbols.

// encode domain name parts
punycode.encode('mañana'); // 'maana-pta'
punycode.encode('☃-⌘'); // '--dqo34k'

punycode.toUnicode(input)

Converts a Punycode string representing a domain name or an email address to Unicode. Only the Punycoded parts of the input will be converted, i.e. it doesn’t matter if you call it on a string that has already been converted to Unicode.

// decode domain names
punycode.toUnicode('xn--maana-pta.com');
// → 'mañana.com'
punycode.toUnicode('xn----dqo34k.com');
// → '☃-⌘.com'

// decode email addresses
punycode.toUnicode('джумла@xn--p-8sbkgc5ag7bhce.xn--ba-lmcq');
// → 'джумла@джpумлатест.bрфa'

punycode.toASCII(input)

Converts a lowercased Unicode string representing a domain name or an email address to Punycode. Only the non-ASCII parts of the input will be converted, i.e. it doesn’t matter if you call it with a domain that’s already in ASCII.

// encode domain names
punycode.toASCII('mañana.com');
// → 'xn--maana-pta.com'
punycode.toASCII('☃-⌘.com');
// → 'xn----dqo34k.com'

// encode email addresses
punycode.toASCII('джумла@джpумлатест.bрфa');
// → 'джумла@xn--p-8sbkgc5ag7bhce.xn--ba-lmcq'

punycode.ucs2

punycode.ucs2.decode(string)

Creates an array containing the numeric code point values of each Unicode symbol in the string. While JavaScript uses UCS-2 internally, this function will convert a pair of surrogate halves (each of which UCS-2 exposes as separate characters) into a single code point, matching UTF-16.

punycode.ucs2.decode('abc');
// → [0x61, 0x62, 0x63]
// surrogate pair for U+1D306 TETRAGRAM FOR CENTRE:
punycode.ucs2.decode('\uD834\uDF06');
// → [0x1D306]
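The surrogate-pair arithmetic behind this can be sketched in plain JavaScript (an illustration of the standard UTF-16 formula, not Punycode.js's actual implementation):

```javascript
// Combine a UTF-16 surrogate pair into a single code point.
// Formula: (high - 0xD800) * 0x400 + (low - 0xDC00) + 0x10000
function combineSurrogates(high, low) {
  return (high - 0xD800) * 0x400 + (low - 0xDC00) + 0x10000;
}

// U+1D306 TETRAGRAM FOR CENTRE is stored as the pair \uD834\uDF06
console.log(combineSurrogates(0xD834, 0xDF06).toString(16)); // '1d306'
```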

punycode.ucs2.encode(codePoints)

Creates a string based on an array of numeric code point values.

punycode.ucs2.encode([0x61, 0x62, 0x63]);
// → 'abc'
punycode.ucs2.encode([0x1D306]);
// → '\uD834\uDF06'

punycode.version

A string representing the current Punycode.js version number.

Unit tests & code coverage

After cloning this repository, run npm install --dev to install the dependencies needed for Punycode.js development and testing. You may want to install Istanbul globally using npm install istanbul -g.

Once that’s done, you can run the unit tests in Node using npm test or node tests/tests.js. To run the tests in Rhino, Ringo, Narwhal, PhantomJS, and web browsers as well, use grunt test.

To generate the code coverage report, use grunt cover.

Feel free to fork if you see possible improvements!

Author

twitter/mathias
Mathias Bynens
twitter/jdalton
John-David Dalton


@nodelib/fs.walk

A library for efficiently walking a directory recursively.

:bulb: Highlights

Install

npm install @nodelib/fs.walk

Usage

import * as fsWalk from '@nodelib/fs.walk';

fsWalk.walk('path', (error, entries) => { /* … */ });

API

.walk(path, optionsOrSettings, callback)

Reads the directory recursively and asynchronously. Requires a callback function.

:book: If you want to use the Promise API, use util.promisify.

fsWalk.walk('path', (error, entries) => { /* … */ });
fsWalk.walk('path', {}, (error, entries) => { /* … */ });
fsWalk.walk('path', new fsWalk.Settings(), (error, entries) => { /* … */ });

.walkStream(path, optionsOrSettings)

Reads the directory recursively and asynchronously. Readable Stream is used as a provider.

const stream = fsWalk.walkStream('path');
const stream = fsWalk.walkStream('path', {});
const stream = fsWalk.walkStream('path', new fsWalk.Settings());

.walkSync(path, optionsOrSettings)

Reads the directory recursively and synchronously. Returns an array of entries.

const entries = fsWalk.walkSync('path');
const entries = fsWalk.walkSync('path', {});
const entries = fsWalk.walkSync('path', new fsWalk.Settings());

path

A path to a file. If a URL is provided, it must use the file: protocol.

optionsOrSettings

An Options object or an instance of Settings class.

:book: When you pass a plain object, an instance of the Settings class will be created automatically. If you plan to call the method frequently, use a pre-created instance of the Settings class.

Settings(options)

A class holding the full settings of the package.

const settings = new fsWalk.Settings({ followSymbolicLinks: true });

const entries = fsWalk.walkSync('path', settings);

Entry

Options

basePath

By default, all paths are built relative to the root path. You can use this option to set a custom root path.

In the example below we read the files from the root directory, but in the results the root path will be custom.

fsWalk.walkSync('root'); // → ['root/file.txt']
fsWalk.walkSync('root', { basePath: 'custom' }); // → ['custom/file.txt']

concurrency

The maximum number of concurrent calls to fs.readdir.

:book: The higher the number, the higher the performance and the greater the load on the file system. If you want to read in quiet mode, set the value to 4 * os.cpus().length (4 is the default size of the thread pool for work scheduling).

deepFilter

A function that indicates whether a directory should be read deeply (recursed into) or not.

// Skip all directories that start with `node_modules`
const filter: DeepFilterFunction = (entry) => !entry.path.startsWith('node_modules');

entryFilter

A function that indicates whether the entry will be included in the results or not.

// Exclude all `.js` files from results
const filter: EntryFilterFunction = (entry) => !entry.name.endsWith('.js');

errorFilter

A function that allows you to skip errors that occur when reading directories.

For example, you can skip ENOENT errors if required:

// Skip all ENOENT errors
const filter: ErrorFilterFunction = (error) => error.code == 'ENOENT';

stats

Adds an instance of fs.Stats class to the Entry.

:book: Always use fs.readdir with additional fs.lstat/fs.stat calls to determine the entry type.

followSymbolicLinks

Follow symbolic links or not. Call fs.stat on the symbolic link if true.

throwErrorOnBrokenSymbolicLink

Throw an error when a symbolic link is broken if true, or safely return the lstat call result if false.

pathSegmentSeparator

By default, this package uses the correct path separator for your OS (\ on Windows, / on Unix-like systems). But you can set this option to any separator character(s) that you want to use instead.

fs

By default, the built-in Node.js module (fs) is used to work with the file system. You can replace any method with your own.

interface FileSystemAdapter {
    lstat: typeof fs.lstat;
    stat: typeof fs.stat;
    lstatSync: typeof fs.lstatSync;
    statSync: typeof fs.statSync;
    readdir: typeof fs.readdir;
    readdirSync: typeof fs.readdirSync;
}

const settings = new fsWalk.Settings({
    fs: { lstat: fakeLstat }
});

Changelog

See the Releases section of our GitHub project for changelog for each release version.



eslint-plugin-promise

Enforce best practices for JavaScript promises.

travis-ci npm version code style: prettier

Installation

You’ll first need to install ESLint:

npm install eslint --save-dev

Next, install eslint-plugin-promise:

npm install eslint-plugin-promise --save-dev

Note: If you installed ESLint globally (using the -g flag) then you must also install eslint-plugin-promise globally.

Usage

Add promise to the plugins section of your .eslintrc.json configuration file. You can omit the eslint-plugin- prefix:

{
  "plugins": ["promise"]
}

Then configure the rules you want to use under the rules section.

{
  "rules": {
    "promise/always-return": "error",
    "promise/no-return-wrap": "error",
    "promise/param-names": "error",
    "promise/catch-or-return": "error",
    "promise/no-native": "off",
    "promise/no-nesting": "warn",
    "promise/no-promise-in-callback": "warn",
    "promise/no-callback-in-promise": "warn",
    "promise/avoid-new": "warn",
    "promise/no-new-statics": "error",
    "promise/no-return-in-finally": "warn",
    "promise/valid-params": "warn"
  }
}

or start with the recommended rule set:

{
  "extends": ["plugin:promise/recommended"]
}

Rules

rule description recommended fixable
catch-or-return Enforces the use of catch() on un-returned promises. :bangbang:
no-return-wrap Avoid wrapping values in Promise.resolve or Promise.reject when not needed. :bangbang:
param-names Enforce consistent param names and ordering when creating new promises. :bangbang:
always-return Return inside each then() to create readable and reusable Promise chains. :bangbang:
no-native In an ES5 environment, make sure to create a Promise constructor before using.
no-nesting Avoid nested then() or catch() statements :warning:
no-promise-in-callback Avoid using promises inside of callbacks :warning:
no-callback-in-promise Avoid calling cb() inside of a then() (use nodeify instead) :warning:
avoid-new Avoid creating new promises outside of utility libs (use pify instead)
no-new-statics Avoid calling new on a Promise static method :bangbang: :wrench:
no-return-in-finally Disallow return statements in finally() :warning:
valid-params Ensures the proper number of arguments are passed to Promise functions :warning:
prefer-await-to-then Prefer await to then() for reading Promise values :seven:
prefer-await-to-callbacks Prefer async/await to the callback pattern :seven:
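As an illustration of what one of these rules enforces, the hypothetical snippet below contrasts code that promise/catch-or-return would flag with code that passes (this is my sketch, not output from the plugin):

```javascript
// Flagged by promise/catch-or-return: the chain is neither returned
// nor terminated with catch(), so rejections are silently dropped.
function fireAndForget(p) {
  p.then((v) => console.log(v));
}

// OK: the chain is returned and ends with catch().
function handled(p) {
  return p.then((v) => v * 2).catch(() => -1);
}

handled(Promise.resolve(21)).then((v) => console.log(v)); // 42
```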

Key

icon description
:bangbang: Reports as error in recommended configuration
:warning: Reports as warning in recommended configuration
:seven: ES2017 Async Await rules
:wrench: Rule is fixable with eslint --fix

Maintainers

@macklinu: https://github.com/macklinu
@xjamundx: https://github.com/xjamundx



URI.js

URI.js is an RFC 3986 compliant, scheme extendable URI parsing/validating/resolving library for all JavaScript environments (browsers, Node.js, etc). It is also compliant with the IRI (RFC 3987), IDNA (RFC 5890), IPv6 Address (RFC 5952), IPv6 Zone Identifier (RFC 6874) specifications.

URI.js has an extensive test suite, and works in all (Node.js, web) environments. It weighs in at 6.4kb (gzipped, 17kb deflated).

API

Parsing

URI.parse("uri://user:pass@example.com:123/one/two.three?q1=a1&q2=a2#body");
//returns:
//{
//  scheme : "uri",
//  userinfo : "user:pass",
//  host : "example.com",
//  port : 123,
//  path : "/one/two.three",
//  query : "q1=a1&q2=a2",
//  fragment : "body"
//}

Serializing

URI.serialize({scheme : "http", host : "example.com", fragment : "footer"}) === "http://example.com/#footer"

Resolving

URI.resolve("uri://a/b/c/d?q", "../../g") === "uri://a/g"

Normalizing

URI.normalize("HTTP://ABC.com:80/%7Esmith/home.html") === "http://abc.com/~smith/home.html"

Comparison

URI.equal("example://a/b/c/%7Bfoo%7D", "eXAMPLE://a/./b/../b/%63/%7bfoo%7d") === true

//IPv4 normalization
URI.normalize("//192.068.001.000") === "//192.68.1.0"

//IPv6 normalization
URI.normalize("//[2001:0:0DB8::0:0001]") === "//[2001:0:db8::1]"

//IPv6 zone identifier support
URI.parse("//[2001:db8::7%25en1]");
//returns:
//{
//  host : "2001:db8::7%en1"
//}

//convert IRI to URI
URI.serialize(URI.parse("http://examplé.org/rosé")) === "http://xn--exampl-gva.org/ros%C3%A9"
//convert URI to IRI
URI.serialize(URI.parse("http://xn--exampl-gva.org/ros%C3%A9"), {iri:true}) === "http://examplé.org/rosé"

Options

All of the above functions can accept an additional options argument that is an object that can contain one or more of the following properties:

Scheme Extendable

URI.js supports inserting custom scheme dependent processing rules. Currently, URI.js has built in support for the following schemes:

URI.equal("HTTP://ABC.COM:80", "http://abc.com/") === true
URI.equal("https://abc.com", "HTTPS://ABC.COM:443/") === true

URI.parse("wss://example.com/foo?bar=baz");
//returns:
//{
//  scheme : "wss",
//  host : "example.com",
//  resourceName : "/foo?bar=baz",
//  secure : true,
//}

URI.equal("WS://ABC.COM:80/chat#one", "ws://abc.com/chat") === true

URI.parse("mailto:alpha@example.com,bravo@example.com?subject=SUBSCRIBE&body=Sign%20me%20up!");
//returns:
//{
//  scheme : "mailto",
//  to : ["alpha@example.com", "bravo@example.com"],
//  subject : "SUBSCRIBE",
//  body : "Sign me up!"
//}

URI.serialize({
  scheme : "mailto",
  to : ["alpha@example.com"],
  subject : "REMOVE",
  body : "Please remove me",
  headers : { cc : "charlie@example.com" }
}) === "mailto:alpha@example.com?cc=charlie@example.com&subject=REMOVE&body=Please%20remove%20me"

URI.parse("urn:example:foo");
//returns:
//{
//  scheme : "urn",
//  nid : "example",
//  nss : "foo",
//}

URI.parse("urn:uuid:f81d4fae-7dec-11d0-a765-00a0c91e6bf6");
//returns:
//{
//  scheme : "urn",
//  nid : "uuid",
//  uuid : "f81d4fae-7dec-11d0-a765-00a0c91e6bf6",
//}

Usage

To load in a browser, use the following tag:

To load in a CommonJS/Module environment, first install with npm/yarn by running on the command line:

npm install uri-js # OR yarn add uri-js

Then, in your code, load it using:

const URI = require("uri-js");

If you are writing your code in ES6+ (ESNEXT) or TypeScript, you would load it using:

import * as URI from "uri-js";

Or you can load just what you need using named exports:

import { parse, serialize, resolve, resolveComponents, normalize, equal, removeDotSegments, pctEncChar, pctDecChars, escapeComponent, unescapeComponent } from "uri-js";

Breaking changes

Breaking changes from 3.x

URN parsing has been completely changed to better align with the specification. Scheme is now always urn, but has two new properties: nid which contains the Namespace Identifier, and nss which contains the Namespace Specific String. The nss property will be removed by higher order scheme handlers, such as the UUID URN scheme handler.

The UUID of a URN can now be found in the uuid property.

Breaking changes from 2.x

URI validation has been removed as it was slow, exposed a vulnerability, and was generally not useful.

Breaking changes from 1.x

The errors array on parsed components is now an error string.



minimatch

A minimal matching utility.

Build Status

This is the matching library used internally by npm.

It works by converting glob expressions into JavaScript RegExp objects.
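A stripped-down illustration of that glob-to-RegExp idea (a sketch only; minimatch's actual implementation also handles braces, extglobs, **, negation, and escaping):

```javascript
// Translate a tiny subset of glob syntax: `*` -> [^/]*, `?` -> [^/]
function globToRegExp(pattern) {
  var source = pattern
    .replace(/[.+^${}()|[\]\\]/g, '\\$&') // escape regexp metacharacters
    .replace(/\*/g, '[^/]*')
    .replace(/\?/g, '[^/]');
  return new RegExp('^' + source + '$');
}

console.log(globToRegExp('*.foo').test('bar.foo')); // true
console.log(globToRegExp('*.bar').test('bar.foo')); // false
```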

Usage

var minimatch = require("minimatch")

minimatch("bar.foo", "*.foo") // true!
minimatch("bar.foo", "*.bar") // false!
minimatch("bar.foo", "*.+(bar|foo)", { debug: true }) // true, and noisy!

Features

See:

Minimatch Class

Create a minimatch object by instantiating the minimatch.Minimatch class.

var Minimatch = require("minimatch").Minimatch
var mm = new Minimatch(pattern, options)

Properties

Methods

All other methods are internal, and will be called as necessary.

minimatch(path, pattern, options)

Main export. Tests a path against the pattern using the options.

var isJS = minimatch(file, "*.js", { matchBase: true })

minimatch.filter(pattern, options)

Returns a function that tests its supplied argument, suitable for use with Array.filter. Example:

var javascripts = fileList.filter(minimatch.filter("*.js", {matchBase: true}))

minimatch.match(list, pattern, options)

Match against the list of files, in the style of fnmatch or glob. If nothing is matched, and options.nonull is set, then return a list containing the pattern itself.

var javascripts = minimatch.match(fileList, "*.js", {matchBase: true})

minimatch.makeRe(pattern, options)

Make a regular expression object from the pattern.

Options

All options are false by default.

debug

Dump a ton of stuff to stderr.

nobrace

Do not expand {a,b} and {1..3} brace sets.

noglobstar

Disable ** matching against multiple folder names.

dot

Allow patterns to match filenames starting with a period, even if the pattern does not explicitly have a period in that spot.

Note that by default, a/**/b will not match a/.d/b, unless dot is set.

noext

Disable “extglob” style patterns like +(a|b).

nocase

Perform a case-insensitive match.

nonull

When a match is not found by minimatch.match, return a list containing the pattern itself if this option is set. When not set, an empty list is returned if there are no matches.

matchBase

If set, then patterns without slashes will be matched against the basename of the path if it contains slashes. For example, a?b would match the path /xyz/123/acb, but not /xyz/acb/123.

nocomment

Suppress the behavior of treating # at the start of a pattern as a comment.

nonegate

Suppress the behavior of treating a leading ! character as negation.

flipNegate

Returns from negate expressions the same as if they were not negated. (i.e., true on a hit, false on a miss.)

Comparisons to other fnmatch/glob implementations

While strict compliance with the existing standards is a worthwhile goal, some discrepancies exist between minimatch and other implementations, and are intentional.

If the pattern starts with a ! character, then it is negated. Set the nonegate flag to suppress this behavior, and treat leading ! characters normally. This is perhaps relevant if you wish to start the pattern with a negative extglob pattern like !(a|B). Multiple ! characters at the start of a pattern will negate the pattern multiple times.

If a pattern starts with #, then it is treated as a comment, and will not match anything. Use \# to match a literal # at the start of a line, or set the nocomment flag to suppress this behavior.

The double-star character ** is supported by default, unless the noglobstar flag is set. This is supported in the manner of bsdglob and bash 4.1, where ** only has special significance if it is the only thing in a path part. That is, a/**/b will match a/x/y/b, but a/**b will not.

If an escaped pattern has no matches, and the nonull flag is set, then minimatch.match returns the pattern as-provided, rather than interpreting the character escapes. For example, minimatch.match([], "\\*a\\?") will return "\\*a\\?" rather than "*a?". This is akin to setting the nullglob option in bash, except that it does not resolve escaped pattern characters.

If brace expansion is not disabled, then it is performed before any other interpretation of the glob pattern. Thus, a pattern like +(a|{b),c)}, which would not be valid in bash or zsh, is expanded first into the set of +(a|b) and +(a|c), and those patterns are checked for validity. Since those two are valid, matching proceeds.



raw-body

NPM Version NPM Downloads Node.js Version Build status Test coverage

Gets the entire buffer of a stream either as a Buffer or a string. Validates the stream’s length against an expected length and maximum limit. Ideal for parsing request bodies.

Install

This is a Node.js module available through the npm registry. Installation is done using the npm install command:

$ npm install raw-body

TypeScript

This module includes a TypeScript declaration file to enable auto complete in compatible editors and type information for TypeScript projects. This module depends on the Node.js types, so install @types/node:

$ npm install @types/node

API

var getRawBody = require('raw-body')

getRawBody(stream, options, callback)

Returns a promise if no callback specified and global Promise exists.

Options:

You can also pass a string in place of options to just specify the encoding.

If an error occurs, the stream will be paused, everything unpiped, and you are responsible for correctly disposing the stream. For HTTP requests, no handling is required if you send a response. For streams that use file descriptors, you should stream.destroy() or stream.close() to prevent leaks.

Errors

This module creates errors depending on the error condition during reading. The error may be an error from the underlying Node.js implementation, but is otherwise an error created by this module, which has the following attributes:

Types

The errors from this module have a type property which allows for the programmatic determination of the type of error returned.

encoding.unsupported

This error will occur when the encoding option is specified, but the value does not map to an encoding supported by the iconv-lite module.

entity.too.large

This error will occur when the limit option is specified, but the stream has an entity that is larger.

request.aborted

This error will occur when the request stream is aborted by the client before reading the body has finished.

request.size.invalid

This error will occur when the length option is specified, but the stream has emitted more bytes.

stream.encoding.set

This error will occur when the given stream has an encoding set on it, making it a decoded stream. The stream should not have an encoding set and is expected to emit Buffer objects.
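One way the type property might be used is to map each error type to an HTTP status in an error handler. The mapping below is illustrative only, not prescribed by raw-body (which also sets statusCode on its own errors):

```javascript
// Map raw-body error types to HTTP status codes (illustrative choices)
function statusForRawBodyError(err) {
  switch (err.type) {
    case 'encoding.unsupported': return 415 // Unsupported Media Type
    case 'entity.too.large':     return 413 // Payload Too Large
    case 'request.aborted':      return 400 // Bad Request
    case 'request.size.invalid': return 400
    case 'stream.encoding.set':  return 500 // programmer error
    default:                     return 500
  }
}

console.log(statusForRawBodyError({ type: 'entity.too.large' })) // 413
```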

Examples

Simple Express example

var contentType = require('content-type')
var express = require('express')
var getRawBody = require('raw-body')

var app = express()

app.use(function (req, res, next) {
  getRawBody(req, {
    length: req.headers['content-length'],
    limit: '1mb',
    encoding: contentType.parse(req).parameters.charset
  }, function (err, string) {
    if (err) return next(err)
    req.text = string
    next()
  })
})

// now access req.text

Simple Koa example

var contentType = require('content-type')
var getRawBody = require('raw-body')
var koa = require('koa')

var app = koa()

app.use(function * (next) {
  this.text = yield getRawBody(this.req, {
    length: this.req.headers['content-length'],
    limit: '1mb',
    encoding: contentType.parse(this.req).parameters.charset
  })
  yield next
})

// now access this.text

Using as a promise

To use this library as a promise, simply omit the callback and a promise is returned, provided that a global Promise is defined.

var getRawBody = require('raw-body')
var http = require('http')

var server = http.createServer(function (req, res) {
  getRawBody(req)
    .then(function (buf) {
      res.statusCode = 200
      res.end(buf.length + ' bytes submitted')
    })
    .catch(function (err) {
      res.statusCode = 500
      res.end(err.message)
    })
})

server.listen(3000)

Using with TypeScript

import * as getRawBody from 'raw-body';
import * as http from 'http';

const server = http.createServer((req, res) => {
  getRawBody(req)
  .then((buf) => {
    res.statusCode = 200;
    res.end(buf.length + ' bytes submitted');
  })
  .catch((err) => {
    res.statusCode = err.statusCode;
    res.end(err.message);
  });
});

server.listen(3000);


Node.js - jsonfile

Easily read/write JSON files in Node.js. Note: this module cannot be used in the browser.

npm Package build status windows Build status

Standard JavaScript

Why?

Writing JSON.stringify() and then fs.writeFile() and JSON.parse() with fs.readFile() enclosed in try/catch blocks became annoying.

Installation

npm install --save jsonfile

API


readFile(filename, options, callback)

options (object, default undefined): Pass in any fs.readFile options or set reviver for a JSON reviver.
- throws (boolean, default: true): If JSON.parse throws an error, pass this error to the callback. If false, returns null for the object.

const jsonfile = require('jsonfile')
const file = '/tmp/data.json'
jsonfile.readFile(file, function (err, obj) {
  if (err) console.error(err)
  console.dir(obj)
})

You can also use this method with promises. The readFile method will return a promise if you do not pass a callback function.

const jsonfile = require('jsonfile')
const file = '/tmp/data.json'
jsonfile.readFile(file)
  .then(obj => console.dir(obj))
  .catch(error => console.error(error))

readFileSync(filename, options)

options (object, default undefined): Pass in any fs.readFileSync options or set reviver for a JSON reviver.
- throws (boolean, default: true): If an error is encountered reading or parsing the file, throw the error. If false, returns null for the object.

const jsonfile = require('jsonfile')
const file = '/tmp/data.json'

console.dir(jsonfile.readFileSync(file))

writeFile(filename, obj, options, callback)

options: Pass in any fs.writeFile options or set replacer for a JSON replacer. You can also pass in spaces, override the EOL string, or set the finalEOL flag to false to save the file without an EOL at the end.

const jsonfile = require('jsonfile')

const file = '/tmp/data.json'
const obj = { name: 'JP' }

jsonfile.writeFile(file, obj, function (err) {
  if (err) console.error(err)
})

Or use with promises as follows:

const jsonfile = require('jsonfile')

const file = '/tmp/data.json'
const obj = { name: 'JP' }

jsonfile.writeFile(file, obj)
  .then(res => {
    console.log('Write complete')
  })
  .catch(error => console.error(error))

formatting with spaces:

const jsonfile = require('jsonfile')

const file = '/tmp/data.json'
const obj = { name: 'JP' }

jsonfile.writeFile(file, obj, { spaces: 2 }, function (err) {
  if (err) console.error(err)
})

overriding EOL:

const jsonfile = require('jsonfile')

const file = '/tmp/data.json'
const obj = { name: 'JP' }

jsonfile.writeFile(file, obj, { spaces: 2, EOL: '\r\n' }, function (err) {
  if (err) console.error(err)
})

disabling the EOL at the end of file:

const jsonfile = require('jsonfile')

const file = '/tmp/data.json'
const obj = { name: 'JP' }

jsonfile.writeFile(file, obj, { spaces: 2, finalEOL: false }, function (err) {
  if (err) console.log(err)
})

appending to an existing JSON file:

You can use fs.writeFile option { flag: 'a' } to achieve this.

const jsonfile = require('jsonfile')

const file = '/tmp/mayAlreadyExistedData.json'
const obj = { name: 'JP' }

jsonfile.writeFile(file, obj, { flag: 'a' }, function (err) {
  if (err) console.error(err)
})

writeFileSync(filename, obj, options)

options: Pass in any fs.writeFileSync options or set replacer for a JSON replacer. You can also pass in spaces, override the EOL string, or set the finalEOL flag to false to save the file without an EOL at the end.

const jsonfile = require('jsonfile')

const file = '/tmp/data.json'
const obj = { name: 'JP' }

jsonfile.writeFileSync(file, obj)

formatting with spaces:

const jsonfile = require('jsonfile')

const file = '/tmp/data.json'
const obj = { name: 'JP' }

jsonfile.writeFileSync(file, obj, { spaces: 2 })

overriding EOL:

const jsonfile = require('jsonfile')

const file = '/tmp/data.json'
const obj = { name: 'JP' }

jsonfile.writeFileSync(file, obj, { spaces: 2, EOL: '\r\n' })

disabling the EOL at the end of file:

const jsonfile = require('jsonfile')

const file = '/tmp/data.json'
const obj = { name: 'JP' }

jsonfile.writeFileSync(file, obj, { spaces: 2, finalEOL: false })

appending to an existing JSON file:

You can use fs.writeFileSync option { flag: 'a' } to achieve this.

const jsonfile = require('jsonfile')

const file = '/tmp/mayAlreadyExistedData.json'
const obj = { name: 'JP' }

jsonfile.writeFileSync(file, obj, { flag: 'a' })


set-value NPM version NPM monthly downloads NPM total downloads Linux Build Status

Create nested values and any intermediaries using dot notation ('a.b.c') paths.

Install

Install with npm:

$ npm install --save set-value

Usage

var set = require('set-value');
set(object, prop, value);

Params

Examples

Updates and returns the given object:

var obj = {};
set(obj, 'a.b.c', 'd');
console.log(obj);
//=> { a: { b: { c: 'd' } } }
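The core idea can be sketched in a few lines. This is a simplification for illustration; the real module also handles the escaping, quoting, and bracket cases covered in the following sections:

```javascript
// Minimal dot-path setter: walk/create intermediaries, assign at the leaf
function naiveSet(obj, path, value) {
  var keys = path.split('.');
  var target = obj;
  for (var i = 0; i < keys.length - 1; i++) {
    if (typeof target[keys[i]] !== 'object' || target[keys[i]] === null) {
      target[keys[i]] = {};
    }
    target = target[keys[i]];
  }
  target[keys[keys.length - 1]] = value;
  return obj;
}

console.log(naiveSet({}, 'a.b.c', 'd'));
//=> { a: { b: { c: 'd' } } }
```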

Escaping

Escaping with backslashes

Prevent set-value from splitting on a dot by prefixing it with backslashes:

console.log(set({}, 'a\\.b.c', 'd'));
//=> { 'a.b': { c: 'd' } }

console.log(set({}, 'a\\.b\\.c', 'd'));
//=> { 'a.b.c': 'd' }

Escaping with double-quotes or single-quotes

Wrap double or single quotes around the string, or part of the string, that should not be split by set-value:

console.log(set({}, '"a.b".c', 'd'));
//=> { 'a.b': { c: 'd' } }

console.log(set({}, "'a.b'.c", "d"));
//=> { 'a.b': { c: 'd' } }

console.log(set({}, '"this/is/a/.file.path"', 'd'));
//=> { 'this/is/a/file.path': 'd' }

Bracket support

set-value does not split inside brackets or braces:

console.log(set({}, '[a.b].c', 'd'));
//=> { '[a.b]': { c: 'd' } }

console.log(set({}, "(a.b).c", "d"));
//=> { '(a.b)': { c: 'd' } }

console.log(set({}, "<a.b>.c", "d"));
//=> { '<a.b>': { c: 'd' } }

console.log(set({}, "{a..b}.c", "d"));
//=> { '{a..b}': { c: 'd' } }

History

v2.0.0

If there are any regressions please create a bug report. Thanks!

About

Contributing

Pull requests and stars are always welcome. For bugs and feature requests, please create an issue.

Commits Contributor
59 jonschlinkert
1 vadimdemedes
1 wtgtybhertgeghgtwtg

Building docs

(This project’s readme.md is generated by verb, please don’t edit the readme directly. Any changes to the readme must be made in the .verb.md readme template.)

To generate the readme, run the following command:

$ npm install -g verbose/verb#dev verb-generate-readme && verb

Running tests

Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command:

$ npm install && npm test

Author

Jon Schlinkert


This file was generated by verb-generate-readme, v0.6.0, on June 21, 2017.



aws4

Build Status

A small utility to sign vanilla Node.js http(s) request options using Amazon’s AWS Signature Version 4.

If you want to sign and send AWS requests in a modern browser, or an environment like Cloudflare Workers, then check out aws4fetch – otherwise you can also bundle this library for use in older browsers.

The only AWS service that doesn’t support v4 as of 2020-05-22 is SimpleDB (it only supports AWS Signature Version 2).

It also provides defaults for a number of core AWS headers and request parameters, making it very easy to query AWS services, or build out a fully-featured AWS library.

Example

var https = require('https')
var aws4  = require('aws4')

// to illustrate usage, we'll create a utility function to request and pipe to stdout
function request(opts) { https.request(opts, function(res) { res.pipe(process.stdout) }).end(opts.body || '') }

// aws4 will sign an options object as you'd pass to http.request, with an AWS service and region
var opts = { host: 'my-bucket.s3.us-west-1.amazonaws.com', path: '/my-object', service: 's3', region: 'us-west-1' }

// aws4.sign() will sign and modify these options, ready to pass to http.request
aws4.sign(opts, { accessKeyId: '', secretAccessKey: '' })

// or it can get credentials from process.env.AWS_ACCESS_KEY_ID, etc
aws4.sign(opts)

// for most AWS services, aws4 can figure out the service and region if you pass a host
opts = { host: 'my-bucket.s3.us-west-1.amazonaws.com', path: '/my-object' }

// usually it will add/modify request headers, but you can also sign the query:
opts = { host: 'my-bucket.s3.amazonaws.com', path: '/?X-Amz-Expires=12345', signQuery: true }

// and for services with simple hosts, aws4 can infer the host from service and region:
opts = { service: 'sqs', region: 'us-east-1', path: '/?Action=ListQueues' }

// and if you're using us-east-1, it's the default:
opts = { service: 'sqs', path: '/?Action=ListQueues' }

aws4.sign(opts)
console.log(opts)
/*
{
  host: 'sqs.us-east-1.amazonaws.com',
  path: '/?Action=ListQueues',
  headers: {
    Host: 'sqs.us-east-1.amazonaws.com',
    'X-Amz-Date': '20121226T061030Z',
    Authorization: 'AWS4-HMAC-SHA256 Credential=ABCDEF/20121226/us-east-1/sqs/aws4_request, ...'
  }
}
*/

// we can now use this to query AWS
request(opts)
/*
<?xml version="1.0"?>
<ListQueuesResponse xmlns="https://queue.amazonaws.com/doc/2012-11-05/">
...
*/

// aws4 can infer the HTTP method if a body is passed in
// method will be POST and Content-Type: 'application/x-www-form-urlencoded; charset=utf-8'
request(aws4.sign({ service: 'iam', body: 'Action=ListGroups&Version=2010-05-08' }))
/*
<ListGroupsResponse xmlns="https://iam.amazonaws.com/doc/2010-05-08/">
...
*/

// you can specify any custom option or header as per usual
request(aws4.sign({
  service: 'dynamodb',
  region: 'ap-southeast-2',
  method: 'POST',
  path: '/',
  headers: {
    'Content-Type': 'application/x-amz-json-1.0',
    'X-Amz-Target': 'DynamoDB_20120810.ListTables'
  },
  body: '{}'
}))
/*
{"TableNames":[]}
...
*/

// The raw RequestSigner can be used to generate CodeCommit Git passwords
var signer = new aws4.RequestSigner({
  service: 'codecommit',
  host: 'git-codecommit.us-east-1.amazonaws.com',
  method: 'GIT',
  path: '/v1/repos/MyAwesomeRepo',
})
var password = signer.getDateTime() + 'Z' + signer.signature()

// see example.js for examples with other services

API

aws4.sign(requestOptions, [credentials])

Calculates and populates any necessary AWS headers and/or request options on requestOptions. Returns requestOptions as a convenience for chaining.

requestOptions is an object holding the same options that the Node.js http.request function takes.

The following properties of requestOptions are used in the signing or populated if they don’t already exist:

Your AWS credentials (which can be found in your AWS console) can be specified in one of two ways:

aws4.sign(requestOptions, {
  secretAccessKey: "<your-secret-access-key>",
  accessKeyId: "<your-access-key-id>",
  sessionToken: "<your-session-token>"
})
export AWS_ACCESS_KEY_ID="<your-access-key-id>"
export AWS_SECRET_ACCESS_KEY="<your-secret-access-key>"
export AWS_SESSION_TOKEN="<your-session-token>"

(will also use AWS_ACCESS_KEY and AWS_SECRET_KEY if available)

The sessionToken property and AWS_SESSION_TOKEN environment variable are optional for signing with IAM STS temporary credentials.

Installation

With npm do:

npm install aws4

Can also be used in the browser.

Thanks

Thanks to [@jed](https://github.com/jed) for his dynamo-client lib where I first committed and subsequently extracted this code.

Also thanks to the official Node.js AWS SDK for giving me a start on implementing the v4 signature.



espurify

Clone new AST without extra properties

API

var purifiedAstClone = espurify(originalAst)

Returns new clone of originalAst but without extra properties.

Leaves only the properties defined in The ESTree Spec (formerly known as the Mozilla SpiderMonkey Parser API). Also note that extra information (such as loc, range and raw) is eliminated too.

var customizedCloneFunctionWithWhiteList = espurify.cloneWithWhitelist(whiteList)

Returns customized function for cloning AST, with user-provided whiteList.

var purifiedAstClone = customizedCloneFunctionWithWhiteList(originalAst)

Returns new clone of originalAst by customized function.

whiteList

type default value
object N/A

whiteList is an object containing NodeType as keys and properties as values.

{
    ArrayExpression: ['type', 'elements'],
    ArrayPattern: ['type', 'elements'],
    ArrowFunctionExpression: ['type', 'id', 'params', 'body', 'generator', 'expression'],
    AssignmentExpression: ['type', 'operator', 'left', 'right'],
    ...
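To illustrate how a whitelist drives the clone, here is a minimal sketch in plain JavaScript (illustrative only, not espurify's actual implementation):

```javascript
// Illustrative whitelist-based clone: copy only the listed keys per
// node type, recursing into child nodes and arrays.
function cloneWithList(node, whiteList) {
  if (Array.isArray(node)) {
    return node.map(function (child) { return cloneWithList(child, whiteList); });
  }
  if (node && typeof node === 'object' && typeof node.type === 'string') {
    var keys = whiteList[node.type] || [];
    var clone = {};
    keys.forEach(function (key) {
      if (key in node) clone[key] = cloneWithList(node[key], whiteList);
    });
    return clone;
  }
  return node; // primitives are copied as-is
}

var whiteList = { Identifier: ['type', 'name'] };
console.log(cloneWithList({ type: 'Identifier', name: 'x', loc: {} }, whiteList));
//=> { type: 'Identifier', name: 'x' }
```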

var customizedCloneFunction = espurify.customize(options)

Returns customized function for cloning AST, configured by custom options.

var purifiedAstClone = customizedCloneFunction(originalAst)

Returns new clone of originalAst by customized function.

options

type default value
object {}

Configuration options. If not passed, default options will be used.

options.extra

type default value
array of string null

List of extra properties to be left in the result AST. For example, functions returned by espurify.customize({extra: ['raw']}) will preserve raw properties of Literal. Functions returned by espurify.customize({extra: ['loc', 'range']}) will preserve loc and range properties of each Node.

EXAMPLE

var espurify = require('espurify'),
    estraverse = require('estraverse'),
    esprima = require('esprima'),
    syntax = estraverse.Syntax,
    assert = require('assert');

var jsCode = 'assert("foo")';

// Adding extra information to the AST
var originalAst = esprima.parse(jsCode, {tolerant: true, loc: true, raw: true});
estraverse.replace(originalAst, {
    leave: function (currentNode, parentNode) {
        if (currentNode.type === syntax.Literal && typeof currentNode.raw !== 'undefined') {
            currentNode['x-verbatim-bar'] = {
                content : currentNode.raw,
                precedence : 18  // escodegen.Precedence.Primary
            };
            return currentNode;
        } else {
            return undefined;
        }
    }
});


// purify AST
var purifiedClone = espurify(originalAst);


// original AST is not modified
assert.deepEqual(originalAst, {
  type: 'Program',
  body: [
    {
      type: 'ExpressionStatement',
      expression: {
        type: 'CallExpression',
        callee: {
          type: 'Identifier',
          name: 'assert',
          loc: {
            start: {
              line: 1,
              column: 0
            },
            end: {
              line: 1,
              column: 6
            }
          }
        },
        arguments: [
          {
            type: 'Literal',
            value: 'foo',
            raw: '"foo"',
            loc: {
              start: {
                line: 1,
                column: 7
              },
              end: {
                line: 1,
                column: 12
              }
            },
            "x-verbatim-bar": {
              content: '"foo"',
              precedence: 18
            }
          }
        ],
        loc: {
          start: {
            line: 1,
            column: 0
          },
          end: {
            line: 1,
            column: 13
          }
        }
      },
      loc: {
        start: {
          line: 1,
          column: 0
        },
        end: {
          line: 1,
          column: 13
        }
      }
    }
  ],
  loc: {
    start: {
      line: 1,
      column: 0
    },
    end: {
      line: 1,
      column: 13
    }
  },
  errors: []
});


// Extra properties are eliminated from cloned AST
assert.deepEqual(purifiedClone, {
    type: 'Program',
    body: [
        {
            type: 'ExpressionStatement',
            expression: {
                type: 'CallExpression',
                callee: {
                    type: 'Identifier',
                    name: 'assert'
                },
                arguments: [
                    {
                        type: 'Literal',
                        value: 'foo'
                    }
                ]
            }
        }
    ]
});

INSTALL

via npm

Install

npm install --save espurify

Use

var espurify = require('espurify');

AUTHOR

CONTRIBUTORS



is-number NPM version NPM monthly downloads NPM total downloads Linux Build Status

Returns true if the value is a finite number.

Please consider following this project’s author, Jon Schlinkert, and consider starring the project to show your :heart: and support.

Install

Install with npm:

$ npm install --save is-number

Why is this needed?

In JavaScript, it’s not always as straightforward as it should be to reliably check if a value is a number. It’s common for devs to use +, -, or Number() to cast a string value to a number (for example, when values are returned from user input, regex matches, parsers, etc). But there are many non-intuitive edge cases that yield unexpected results:

console.log(+[]); //=> 0
console.log(+''); //=> 0
console.log(+'   '); //=> 0
console.log(typeof NaN); //=> 'number'

This library offers a performant way to smooth out edge cases like these.
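The kind of check this library performs can be sketched as follows (an illustrative sketch, not the library's exact source):

```javascript
// Illustrative finite-number check: a value qualifies only if it is a
// finite number, or a non-blank string that coerces to one.
function isFiniteNumber(value) {
  if (typeof value === 'number') {
    return value - value === 0; // false for NaN and ±Infinity
  }
  if (typeof value === 'string' && value.trim() !== '') {
    return Number.isFinite(+value);
  }
  return false;
}

console.log(isFiniteNumber('5e3')); //=> true
console.log(isFiniteNumber(''));    //=> false
console.log(isFiniteNumber(NaN));   //=> false
```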

Usage

const isNumber = require('is-number');

See the tests for more examples.

true

isNumber(5e3);               // true
isNumber(0xff);              // true
isNumber(-1.1);              // true
isNumber(0);                 // true
isNumber(1);                 // true
isNumber(1.1);               // true
isNumber(10);                // true
isNumber(10.10);             // true
isNumber(100);               // true
isNumber('-1.1');            // true
isNumber('0');               // true
isNumber('012');             // true
isNumber('0xff');            // true
isNumber('1');               // true
isNumber('1.1');             // true
isNumber('10');              // true
isNumber('10.10');           // true
isNumber('100');             // true
isNumber('5e3');             // true
isNumber(parseInt('012'));   // true
isNumber(parseFloat('012')); // true

False

Everything else is false, as you would expect:

isNumber(Infinity);          // false
isNumber(NaN);               // false
isNumber(null);              // false
isNumber(undefined);         // false
isNumber('');                // false
isNumber('   ');             // false
isNumber('foo');             // false
isNumber([1]);               // false
isNumber([]);                // false
isNumber(function () {});    // false
isNumber({});                // false

Release history

7.0.0

6.0.0

5.0.0

Breaking changes

Benchmarks

As with all benchmarks, take these with a grain of salt. See the benchmarks for more detail.

# all
v7.0 x 413,222 ops/sec ±2.02% (86 runs sampled)
v6.0 x 111,061 ops/sec ±1.29% (85 runs sampled)
parseFloat x 317,596 ops/sec ±1.36% (86 runs sampled)
fastest is 'v7.0'

# string
v7.0 x 3,054,496 ops/sec ±1.05% (89 runs sampled)
v6.0 x 2,957,781 ops/sec ±0.98% (88 runs sampled)
parseFloat x 3,071,060 ops/sec ±1.13% (88 runs sampled)
fastest is 'parseFloat,v7.0'

# number
v7.0 x 3,146,895 ops/sec ±0.89% (89 runs sampled)
v6.0 x 3,214,038 ops/sec ±1.07% (89 runs sampled)
parseFloat x 3,077,588 ops/sec ±1.07% (87 runs sampled)
fastest is 'v6.0'

About

Contributing

Pull requests and stars are always welcome. For bugs and feature requests, please create an issue.

Running Tests

Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command:

$ npm install && npm test

Building docs

(This project’s readme.md is generated by verb, please don’t edit the readme directly. Any changes to the readme must be made in the .verb.md readme template.)

To generate the readme, run the following command:

$ npm install -g verbose/verb#dev verb-generate-readme && verb

You might also be interested in these projects:

Commits Contributor
49 jonschlinkert
5 charlike-old
1 benaadams
1 realityking

Author

Jon Schlinkert


This file was generated by verb-generate-readme, v0.6.0, on June 15, 2018.



Pure JS character encoding conversion Build Status

NPM Stats

Usage

Basic API

var iconv = require('iconv-lite');

// Convert from an encoded buffer to js string.
str = iconv.decode(Buffer.from([0x68, 0x65, 0x6c, 0x6c, 0x6f]), 'win1251');

// Convert from js string to an encoded buffer.
buf = iconv.encode("Sample input string", 'win1251');

// Check if encoding is supported
iconv.encodingExists("us-ascii")

Streaming API (Node v0.10+)


// Decode stream (from binary stream to js strings)
http.createServer(function(req, res) {
    var converterStream = iconv.decodeStream('win1251');
    req.pipe(converterStream);

    converterStream.on('data', function(str) {
        console.log(str); // Do something with decoded strings, chunk-by-chunk.
    });
});

// Convert encoding streaming example
fs.createReadStream('file-in-win1251.txt')
    .pipe(iconv.decodeStream('win1251'))
    .pipe(iconv.encodeStream('ucs2'))
    .pipe(fs.createWriteStream('file-in-ucs2.txt'));

// Sugar: all encode/decode streams have .collect(cb) method to accumulate data.
http.createServer(function(req, res) {
    req.pipe(iconv.decodeStream('win1251')).collect(function(err, body) {
        assert(typeof body == 'string');
        console.log(body); // full request body string
    });
});

[Deprecated] Extend Node.js own encodings

NOTE: This doesn’t work on latest Node versions. See details.

// After this call all Node basic primitives will understand iconv-lite encodings.
iconv.extendNodeEncodings();

// Examples:
buf = new Buffer(str, 'win1251');
buf.write(str, 'gbk');
str = buf.toString('latin1');
assert(Buffer.isEncoding('iso-8859-15'));
Buffer.byteLength(str, 'us-ascii');

http.createServer(function(req, res) {
    req.setEncoding('big5');
    req.collect(function(err, body) {
        console.log(body);
    });
});

fs.createReadStream("file.txt", "shift_jis");

// External modules are also supported (if they use Node primitives, which they probably do).
request = require('request');
request({
    url: "http://github.com/", 
    encoding: "cp932"
});

// To remove extensions
iconv.undoExtendNodeEncodings();

Most singlebyte encodings are generated automatically from node-iconv. Thank you Ben Noordhuis and libiconv authors!

Multibyte encodings are generated from Unicode.org mappings and WHATWG Encoding Standard mappings. Thank you, respective authors!

Encoding/decoding speed

Comparison with node-iconv module (1000x256kb, on MacBook Pro, Core i5/2.6 GHz, Node v0.12.0). Note: your results may vary, so please always check on your hardware.

operation          iconv@2.1.4  iconv-lite@0.4.7
encode('win1251')  ~96 Mb/s     ~320 Mb/s
decode('win1251')  ~95 Mb/s     ~246 Mb/s

BOM handling

UTF-16 Encodings

This library supports UTF-16LE, UTF-16BE and UTF-16 encodings. The first two are straightforward, but UTF-16 tries to be smart about endianness in the following ways:

Decoding: uses the BOM and a 'spaces heuristic' to determine input endianness. Default is UTF-16LE, but can be overridden with the defaultEncoding: 'utf-16be' option. Strips the BOM unless stripBOM: false.
Encoding: uses UTF-16LE and writes the BOM by default. Use addBOM: false to override.
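The BOM part of the decoding heuristic can be sketched in plain JavaScript (illustrative only; iconv-lite's real detector also applies the 'spaces heuristic' when no BOM is present):

```javascript
// Illustrative BOM-based endianness detection for UTF-16 input.
function detectUtf16(buf, defaultEncoding) {
  if (buf.length >= 2) {
    if (buf[0] === 0xFF && buf[1] === 0xFE) return 'utf-16le';
    if (buf[0] === 0xFE && buf[1] === 0xFF) return 'utf-16be';
  }
  // No BOM: fall back to the default (UTF-16LE unless overridden).
  return defaultEncoding || 'utf-16le';
}

console.log(detectUtf16(Buffer.from([0xFF, 0xFE, 0x68, 0x00]))); //=> 'utf-16le'
console.log(detectUtf16(Buffer.from([0xFE, 0xFF, 0x00, 0x68]))); //=> 'utf-16be'
```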

Other notes

When decoding, be sure to supply a Buffer to decode() method, otherwise bad things usually happen.
Untranslatable characters are set to � or ?. No transliteration is currently supported.
Node versions 0.10.31 and 0.11.13 are buggy, don’t use them (see #65, #77).

Testing

$ git clone git@github.com:ashtuchkin/iconv-lite.git
$ cd iconv-lite
$ npm install
$ npm test
    
$ # To view performance:
$ node test/performance.js

$ # To view test coverage:
$ npm run coverage
$ open coverage/lcov-report/index.html



@datastructures-js/trie


Trie implementation in javascript. Each Trie node holds one character of a word.

Trie
Trie


Table of Contents

Install

npm install --save @datastructures-js/trie

API

require

const Trie = require('@datastructures-js/trie');

import

import Trie from '@datastructures-js/trie';

Construction

// example
const englishLang = new Trie();

.insert(word)

insert a string word into the trie.

params
name type
word string
return
TrieNode
runtime
O(k) : k = length of the word

Example

englishLang.insert('hi');
englishLang.insert('hit');
englishLang.insert('hide');
englishLang.insert('hello');
englishLang.insert('sand');
englishLang.insert('safe');
englishLang.insert('noun');
englishLang.insert('name');

Note: the empty string is not a default word in the trie. You can add the empty word explicitly using .insert('')

.has(word)

checks if a word exists in the trie.

params
name type
word string
return
boolean
runtime
O(k) : k = length of the word

Example

englishLang.has('hi'); // true
englishLang.has('sky'); // false

.find(word)

finds a word in the trie and returns the node of its last character.

params
name type
word string
return
TrieNode
runtime
O(k) : k = length of the word

Example

const hi = englishLang.find('hi');
// hi.getChar() = 'i'
// hi.getParent().getChar() = 'h'

const safe = englishLang.find('safe');
// safe.getChar() = 'e'
// safe.getParent().getChar() = 'f'
// safe.getParent().getParent().getChar() = 'a'

.remove(word)

removes a word from the trie.

params
name type
word string
return
boolean
runtime
O(k) : k = length of the word

Example

englishLang.remove('hi'); // true - hi removed
englishLang.remove('sky'); // false - nothing is removed

.forEach(cb)

traverses all words in the trie.

params
name type description
cb function called with each word in the trie
runtime
O(n) : n = number of nodes in the trie

Example

englishLang.forEach((word) => console.log(word));

/*
hit
hide
hello
sand
safe
noun
name
*/

.toArray()

converts the trie into an array of words.

return description
array a list of all the words in the trie
runtime
O(n) : n = number of nodes in the trie

Example

console.log(englishLang.toArray());

// ['hit', 'hide', 'hello', 'sand', 'safe', 'noun', 'name']

.wordsCount()

gets the count of words in the trie.

return
number
runtime
O(1)

Example

console.log(englishLang.wordsCount()); // 7

.nodesCount()

gets the count of nodes in the trie.

return
number
runtime
O(1)

Example

console.log(englishLang.nodesCount()); // 23

.clear()

clears the trie.

runtime
O(1)

Example

englishLang.clear();
console.log(englishLang.wordsCount()); // 0
console.log(englishLang.nodesCount()); // 1

TrieNode

.getChar()

returns the node’s char.

return
string

.getParent()

returns the parent node.

return
TrieNode

.isEndOfWord()

checks whether the node marks the end of a word.

return
boolean

.getChild(char)

returns the child node of a char.

return
TrieNode

.hasChild(char)

checks whether the node has a child node for the given char.

return
boolean

.childrenCount()

returns the number of children nodes.

return
number
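To show how these methods fit together, here is a minimal illustrative node class (not the library's implementation; addChild is a helper invented for this sketch):

```javascript
// Minimal illustrative TrieNode mirroring the API described above.
class MiniTrieNode {
  constructor(char, parent = null) {
    this._char = char;
    this._parent = parent;
    this._children = new Map();
    this._endOfWord = false;
  }
  getChar() { return this._char; }
  getParent() { return this._parent; }
  isEndOfWord() { return this._endOfWord; }
  hasChild(char) { return this._children.has(char); }
  getChild(char) { return this._children.get(char) || null; }
  childrenCount() { return this._children.size; }
  addChild(char) { // helper for this sketch only
    const child = new MiniTrieNode(char, this);
    this._children.set(char, child);
    return child;
  }
}

// Build the chain for the word 'hi': root -> 'h' -> 'i'.
const root = new MiniTrieNode('');
const h = root.addChild('h');
const i = h.addChild('i');
i._endOfWord = true; // mark 'i' as the last character of a word

console.log(i.getParent().getChar()); //=> 'h'
console.log(root.hasChild('h'));      //=> true
```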

Build

grunt build

NPM version build status Test coverage Downloads Join the chat at https://gitter.im/eslint/doctrine



Doctrine

Doctrine is a JSDoc parser that parses documentation comments from JavaScript (you need to pass in the comment, not a whole JavaScript file).

Installation

You can install Doctrine using npm:

npm install doctrine --save-dev

Doctrine can also be used in web browsers using Browserify.

Usage

Require doctrine inside of your JavaScript:

var doctrine = require("doctrine");

parse()

The primary method is parse(), which accepts two arguments: the JSDoc comment to parse and an optional options object. The available options are:

Here’s a simple example:

var ast = doctrine.parse(
    [
        "/**",
        " * This function comment is parsed by doctrine",
        " * @param {{ok:String}} userName",
        "*/"
    ].join('\n'), { unwrap: true });

This example returns the following AST:

{
    "description": "This function comment is parsed by doctrine",
    "tags": [
        {
            "title": "param",
            "description": null,
            "type": {
                "type": "RecordType",
                "fields": [
                    {
                        "type": "FieldType",
                        "key": "ok",
                        "value": {
                            "type": "NameExpression",
                            "name": "String"
                        }
                    }
                ]
            },
            "name": "userName"
        }
    ]
}

See the demo page for more detail.

Team

These folks keep the project moving and are resources for help:

Contributing

Issues and pull requests will be triaged and responded to as quickly as possible. We operate under the ESLint Contributor Guidelines, so please be sure to read them before contributing. If you’re not sure where to dig in, check out the issues.

Frequently Asked Questions

Can I pass a whole JavaScript file to Doctrine?

No. Doctrine can only parse JSDoc comments, so you'll need to pass just the JSDoc comment to Doctrine for it to work.
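For example, you could extract the comments yourself before calling parse(). A simple regex-based sketch (illustrative only; a real tool would use a JavaScript parser that preserves comments):

```javascript
// Pull all /** ... */ comment blocks out of a source string so each
// one can be passed to doctrine.parse() individually.
function extractJsdocComments(source) {
  return source.match(/\/\*\*[\s\S]*?\*\//g) || [];
}

var source = '/** @param {String} name */\nfunction greet(name) {}';
console.log(extractJsdocComments(source));
//=> [ '/** @param {String} name */' ]
```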

doctrine

esprima

some functions are derived from esprima

closure-compiler

some extensions are derived from closure-compiler

Where to ask for help?

Join our Chatroom



to-regex NPM version NPM monthly downloads NPM total downloads Linux Build Status

Generate a regex from a string or array of strings.

Please consider following this project’s author, Jon Schlinkert, and consider starring the project to show your :heart: and support.

(TOC generated by verb using markdown-toc)

Install

Install with npm:

$ npm install --save to-regex

Usage

var toRegex = require('to-regex');

console.log(toRegex('foo'));
//=> /^(?:foo)$/

console.log(toRegex('foo', {negate: true}));
//=> /^(?:(?:(?!^(?:foo)$).)*)$/

console.log(toRegex('foo', {contains: true}));
//=> /(?:foo)/

console.log(toRegex(['foo', 'bar'], {negate: true}));
//=> /^(?:(?:(?!^(?:(?:foo)|(?:bar))$).)*)$/

console.log(toRegex(['foo', 'bar'], {negate: true, contains: true}));
//=> /^(?:(?:(?!(?:(?:foo)|(?:bar))).)*)$/

Options

options.contains

Type: Boolean

Default: undefined

Generate a regex that will match any string that contains the given pattern. By default, the generated regex is strict and will only return true for exact matches.

var toRegex = require('to-regex');
console.log(toRegex('foo', {contains: true}));
//=> /(?:foo)/

options.negate

Type: Boolean

Default: undefined

Create a regex that will match everything except the given pattern.

var toRegex = require('to-regex');
console.log(toRegex('foo', {negate: true}));
//=> /^(?:(?:(?!^(?:foo)$).)*)$/

options.nocase

Type: Boolean

Default: undefined

Adds the i flag, to enable case-insensitive matching.

var toRegex = require('to-regex');
console.log(toRegex('foo', {nocase: true}));
//=> /^(?:foo)$/i

Alternatively you can pass the flags you want directly on options.flags.

options.flags

Type: String

Default: undefined

Define the flags you want to use on the generated regex.

var toRegex = require('to-regex');
console.log(toRegex('foo', {flags: 'gm'}));
//=> /^(?:foo)$/gm
console.log(toRegex('foo', {flags: 'gmi', nocase: true})); //<= handles redundancy
//=> /^(?:foo)$/gmi

options.cache

Type: Boolean

Default: true

Generated regex is cached based on the provided string and options. As a result, runtime compilation only happens once per pattern (as long as options are also the same), which can result in dramatic speed improvements.

This also helps with debugging, since the options and pattern are added to the generated regex.

Disable caching

toRegex('foo', {cache: false});
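The caching idea itself can be sketched with a simple memoized factory (illustrative only, not to-regex's actual code):

```javascript
// Memoize compiled regexes by pattern + options so repeated calls
// with the same inputs return the same RegExp instance.
var cache = {};
function cachedRegex(pattern, options) {
  var key = pattern + JSON.stringify(options || {});
  if (!cache[key]) {
    cache[key] = new RegExp('^(?:' + pattern + ')$');
  }
  return cache[key];
}

console.log(cachedRegex('foo') === cachedRegex('foo')); //=> true
```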

options.safe

Type: Boolean

Default: undefined

Check the generated regular expression with safe-regex and throw an error if the regex is potentially unsafe.

Examples

console.log(toRegex('(x+x+)+y'));
//=> /^(?:(x+x+)+y)$/

// The following would throw an error
toRegex('(x+x+)+y', {safe: true});

About

Contributing

Pull requests and stars are always welcome. For bugs and feature requests, please create an issue.

Running Tests

Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command:

$ npm install && npm test

Building docs

(This project’s readme.md is generated by verb, please don’t edit the readme directly. Any changes to the readme must be made in the .verb.md readme template.)

To generate the readme, run the following command:

$ npm install -g verbose/verb#dev verb-generate-readme && verb

You might also be interested in these projects:

Author

Jon Schlinkert


This file was generated by verb-generate-readme, v0.6.0, on February 24, 2018.



psl (Public Suffix List)

NPM

Greenkeeper badge Build Status devDependency Status

psl is a JavaScript domain name parser based on the Public Suffix List.

This implementation is tested against the test data hosted by Mozilla and kindly provided by Comodo.

Cross browser testing provided by BrowserStack

What is the Public Suffix List?

The Public Suffix List is a cross-vendor initiative to provide an accurate list of domain name suffixes.

A “public suffix” is one under which Internet users can directly register names. Some examples of public suffixes are “.com”, “.co.uk” and “pvt.k12.wy.us”. The Public Suffix List is a list of all known public suffixes.

Source: http://publicsuffix.org

Installation

Node.js

npm install --save psl

Browser

Download psl.min.js and include it in a script tag.

<script src="psl.min.js"></script>

This script is browserified and wrapped in a UMD wrapper, so you should be able to use it standalone or together with a module loader.

API

psl.parse(domain)

Parse domain based on Public Suffix List. Returns an Object with the following properties:

Example:

var psl = require('psl');

// Parse domain without subdomain
var parsed = psl.parse('google.com');
console.log(parsed.tld); // 'com'
console.log(parsed.sld); // 'google'
console.log(parsed.domain); // 'google.com'
console.log(parsed.subdomain); // null

// Parse domain with subdomain
var parsed = psl.parse('www.google.com');
console.log(parsed.tld); // 'com'
console.log(parsed.sld); // 'google'
console.log(parsed.domain); // 'google.com'
console.log(parsed.subdomain); // 'www'

// Parse domain with nested subdomains
var parsed = psl.parse('a.b.c.d.foo.com');
console.log(parsed.tld); // 'com'
console.log(parsed.sld); // 'foo'
console.log(parsed.domain); // 'foo.com'
console.log(parsed.subdomain); // 'a.b.c.d'

psl.get(domain)

Get domain name, sld + tld. Returns null if not valid.

Example:

var psl = require('psl');

// null input.
psl.get(null); // null

// Mixed case.
psl.get('COM'); // null
psl.get('example.COM'); // 'example.com'
psl.get('WwW.example.COM'); // 'example.com'

// Unlisted TLD.
psl.get('example'); // null
psl.get('example.example'); // 'example.example'
psl.get('b.example.example'); // 'example.example'
psl.get('a.b.example.example'); // 'example.example'

// TLD with only 1 rule.
psl.get('biz'); // null
psl.get('domain.biz'); // 'domain.biz'
psl.get('b.domain.biz'); // 'domain.biz'
psl.get('a.b.domain.biz'); // 'domain.biz'

// TLD with some 2-level rules.
psl.get('uk.com'); // null);
psl.get('example.uk.com'); // 'example.uk.com');
psl.get('b.example.uk.com'); // 'example.uk.com');

// More complex TLD.
psl.get('c.kobe.jp'); // null
psl.get('b.c.kobe.jp'); // 'b.c.kobe.jp'
psl.get('a.b.c.kobe.jp'); // 'b.c.kobe.jp'
psl.get('city.kobe.jp'); // 'city.kobe.jp'
psl.get('www.city.kobe.jp'); // 'city.kobe.jp'

// IDN labels.
psl.get('食狮.com.cn'); // '食狮.com.cn'
psl.get('食狮.公司.cn'); // '食狮.公司.cn'
psl.get('www.食狮.公司.cn'); // '食狮.公司.cn'

// Same as above, but punycoded.
psl.get('xn--85x722f.com.cn'); // 'xn--85x722f.com.cn'
psl.get('xn--85x722f.xn--55qx5d.cn'); // 'xn--85x722f.xn--55qx5d.cn'
psl.get('www.xn--85x722f.xn--55qx5d.cn'); // 'xn--85x722f.xn--55qx5d.cn'

psl.isValid(domain)

Check whether a domain has a valid Public Suffix. Returns a Boolean.

Example

var psl = require('psl');

psl.isValid('google.com'); // true
psl.isValid('www.google.com'); // true
psl.isValid('x.yz'); // false

Testing and Building

Tests are written using mocha and can be run in two different environments: node and phantomjs.

# This will run `eslint`, `mocha` and `karma`.
npm test

# Individual test environments
# Run tests in node only.
./node_modules/.bin/mocha test
# Run tests in phantomjs only.
./node_modules/.bin/karma start ./karma.conf.js --single-run

# Build data (parse raw list) and create dist files
npm run build

Feel free to fork if you see possible improvements!

Acknowledgements

esutils Build Status

esutils is a utility box for ECMAScript language tools.

API

ast

ast.isExpression(node)

Returns true if node is an Expression as defined in ECMA262 edition 5.1 section 11.

ast.isStatement(node)

Returns true if node is a Statement as defined in ECMA262 edition 5.1 section 12.

ast.isIterationStatement(node)

Returns true if node is an IterationStatement as defined in ECMA262 edition 5.1 section 12.6.

ast.isSourceElement(node)

Returns true if node is a SourceElement as defined in ECMA262 edition 5.1 section 14.

ast.trailingStatement(node)

Returns node’s trailing Statement (Statement?), if it has one.

if (cond)
    consequent;

Given this IfStatement, trailingStatement returns the consequent; statement.

ast.isProblematicIfStatement(node)

Returns true if node is a problematic IfStatement, i.e. one that cannot be represented as one-to-one JavaScript code.

{
    type: 'IfStatement',
    consequent: {
        type: 'WithStatement',
        body: {
            type: 'IfStatement',
            consequent: {type: 'EmptyStatement'}
        }
    },
    alternate: {type: 'EmptyStatement'}
}

The above node cannot be represented as JavaScript code, since the top-level else alternate belongs to an inner IfStatement.

code

code.isDecimalDigit(code)

Returns true if the provided code is a decimal digit.

code.isHexDigit(code)

Returns true if the provided code is a hexadecimal digit.

code.isOctalDigit(code)

Returns true if the provided code is an octal digit.

code.isWhiteSpace(code)

Returns true if the provided code is white space. White space characters are formally defined in ECMA262.

code.isLineTerminator(code)

Returns true if the provided code is a line terminator. Line terminator characters are formally defined in ECMA262.

code.isIdentifierStart(code)

Returns true if the provided code can be the first character of an ECMA262 Identifier. These characters are formally defined in ECMA262.

code.isIdentifierPart(code)

Returns true if the provided code can be a trailing character of an ECMA262 Identifier. These characters are formally defined in ECMA262.
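Note that the code.* predicates take numeric code units, not strings. A minimal re-implementation of two of them illustrates the calling convention (a sketch of the semantics, not the library’s exact source):

```javascript
// Sketch of code.isDecimalDigit / code.isHexDigit semantics: the argument is
// a character code (e.g. from String.prototype.charCodeAt), not a string.
function isDecimalDigit(ch) {
  return ch >= 0x30 && ch <= 0x39; // '0'..'9'
}

function isHexDigit(ch) {
  return isDecimalDigit(ch) ||
    (ch >= 0x61 && ch <= 0x66) ||  // 'a'..'f'
    (ch >= 0x41 && ch <= 0x46);    // 'A'..'F'
}

console.log(isDecimalDigit('7'.charCodeAt(0))); // true
console.log(isHexDigit('g'.charCodeAt(0)));     // false
```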

keyword

keyword.isKeywordES5(id, strict)

Returns true if provided identifier string is a Keyword or Future Reserved Word in ECMA262 edition 5.1. They are formally defined in ECMA262 sections 7.6.1.1 and 7.6.1.2, respectively. If the strict flag is truthy, this function additionally checks whether id is a Keyword or Future Reserved Word under strict mode.

keyword.isKeywordES6(id, strict)

Returns true if provided identifier string is a Keyword or Future Reserved Word in ECMA262 edition 6. They are formally defined in ECMA262 sections 11.6.2.1 and 11.6.2.2, respectively. If the strict flag is truthy, this function additionally checks whether id is a Keyword or Future Reserved Word under strict mode.

keyword.isReservedWordES5(id, strict)

Returns true if provided identifier string is a Reserved Word in ECMA262 edition 5.1. They are formally defined in ECMA262 section 7.6.1. If the strict flag is truthy, this function additionally checks whether id is a Reserved Word under strict mode.

keyword.isReservedWordES6(id, strict)

Returns true if provided identifier string is a Reserved Word in ECMA262 edition 6. They are formally defined in ECMA262 section 11.6.2. If the strict flag is truthy, this function additionally checks whether id is a Reserved Word under strict mode.

keyword.isRestrictedWord(id)

Returns true if provided identifier string is one of eval or arguments. They are restricted in strict mode code throughout ECMA262 edition 5.1 and in ECMA262 edition 6 section 12.1.1.

keyword.isIdentifierNameES5(id)

Returns true if the provided identifier string is an IdentifierName as specified in ECMA262 edition 5.1 section 7.6.

keyword.isIdentifierNameES6(id)

Returns true if the provided identifier string is an IdentifierName as specified in ECMA262 edition 6 section 11.6.

keyword.isIdentifierES5(id, strict)

Returns true if the provided identifier string is an Identifier as specified in ECMA262 edition 5.1 section 7.6. If the strict flag is truthy, this function additionally checks whether id is an Identifier under strict mode.

keyword.isIdentifierES6(id, strict)

Returns true if the provided identifier string is an Identifier as specified in ECMA262 edition 6 section 12.1. If the strict flag is truthy, this function additionally checks whether id is an Identifier under strict mode.
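The effect of the strict flag can be sketched with a small, illustrative subset of the real keyword tables (the actual esutils tables cover every ECMA262 Keyword and Future Reserved Word; this is not the full list):

```javascript
// Illustrative subset only, in the spirit of keyword.isKeywordES5.
var es5Keywords = ['if', 'else', 'for', 'while', 'function', 'return',
                   'var', 'new', 'typeof', 'this', 'in'];
// Future Reserved Words that are only reserved in strict mode code.
var strictOnlyReserved = ['implements', 'interface', 'let', 'package',
                          'private', 'protected', 'public', 'static', 'yield'];

function isKeywordES5(id, strict) {
  if (es5Keywords.indexOf(id) !== -1) return true;
  return Boolean(strict) && strictOnlyReserved.indexOf(id) !== -1;
}

console.log(isKeywordES5('if', false));  // true
console.log(isKeywordES5('let', false)); // false: only reserved in strict mode
console.log(isKeywordES5('let', true));  // true
```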



readdirp Weekly downloads

Recursive version of fs.readdir. Exposes a stream API and a promise API.

npm install readdirp
const readdirp = require('readdirp');

// Use streams to achieve small RAM & CPU footprint.
// 1) Streams example with for-await.
for await (const entry of readdirp('.')) {
  const {path} = entry;
  console.log(`${JSON.stringify({path})}`);
}

// 2) Streams example, non for-await.
// Print out all JS files along with their size within the current folder & subfolders.
readdirp('.', {fileFilter: '*.js', alwaysStat: true})
  .on('data', (entry) => {
    const {path, stats: {size}} = entry;
    console.log(`${JSON.stringify({path, size})}`);
  })
  // Optionally call stream.destroy() in `warn()` in order to abort and cause 'close' to be emitted
  .on('warn', error => console.error('non-fatal error', error))
  .on('error', error => console.error('fatal error', error))
  .on('end', () => console.log('done'));

// 3) Promise example. More RAM and CPU than streams / for-await.
const files = await readdirp.promise('.');
console.log(files.map(file => file.path));

// Other options.
readdirp('test', {
  fileFilter: '*.js',
  directoryFilter: ['!.git', '!*modules'],
  // directoryFilter: (di) => di.basename.length === 9
  type: 'files_directories',
  depth: 1
});

For more examples, check out examples directory.

API

const stream = readdirp(root[, options]) (Stream API)

const entries = await readdirp.promise(root[, options]) (Promise API). Returns a list of entry infos.

The first argument is always root, the path in which to start reading and recursing into subdirectories.

options

EntryInfo

Has the following properties:

Changelog



eslint-plugin-prettier Build Status

Runs Prettier as an ESLint rule and reports differences as individual ESLint issues.

If your desired formatting does not match Prettier’s output, you should use a different tool such as prettier-eslint instead.

Sample

error: Insert `,` (prettier/prettier) at pkg/commons-atom/ActiveEditorRegistry.js:22:25:
  20 | import {
  21 |   observeActiveEditorsDebounced,
> 22 |   editorChangesDebounced
     |                         ^
  23 | } from './debounced';;
  24 |
  25 | import {observableFromSubscribeFunction} from '../commons-node/event';


error: Delete `;` (prettier/prettier) at pkg/commons-atom/ActiveEditorRegistry.js:23:21:
  21 |   observeActiveEditorsDebounced,
  22 |   editorChangesDebounced
> 23 | } from './debounced';;
     |                     ^
  24 |
  25 | import {observableFromSubscribeFunction} from '../commons-node/event';
  26 | import {cacheWhileSubscribed} from '../commons-node/observable';


2 errors found.

./node_modules/.bin/eslint --format codeframe pkg/commons-atom/ActiveEditorRegistry.js (code from nuclide).

Installation

npm install --save-dev eslint-plugin-prettier
npm install --save-dev --save-exact prettier

eslint-plugin-prettier does not install Prettier or ESLint for you. You must install these yourself.

Then, in your .eslintrc.json:

{
  "plugins": ["prettier"],
  "rules": {
    "prettier/prettier": "error"
  }
}

This plugin works best if you disable all other ESLint rules relating to code formatting, and only enable rules that detect potential bugs. (If another active ESLint rule disagrees with prettier about how code should be formatted, it will be impossible to avoid lint errors.) You can use eslint-config-prettier to disable all formatting-related ESLint rules.

This plugin ships with a plugin:prettier/recommended config that sets up both the plugin and eslint-config-prettier in one go.

  1. In addition to the above installation instructions, install eslint-config-prettier:

  2. Then you need to add plugin:prettier/recommended as the last extension in your .eslintrc.json:

    You can then set Prettier’s own options inside a .prettierrc file.

  3. Some ESLint plugins (such as eslint-plugin-react) also contain rules that conflict with Prettier. Add extra exclusions for the plugins you use like so:

    For the list of every available exclusion rule set, please see the readme of eslint-config-prettier.
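Steps 1 and 2 above boil down to a one-line change in .eslintrc.json (a sketch; your config may have other entries before it):

```json
{
  "extends": ["plugin:prettier/recommended"]
}
```

Keeping it as the last extension ensures the formatting-rule exclusions from eslint-config-prettier are not overridden by earlier configs.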

Exactly what does plugin:prettier/recommended do? Well, this is what it expands to:

{
  "extends": ["prettier"],
  "plugins": ["prettier"],
  "rules": {
    "prettier/prettier": "error",
    "arrow-body-style": "off",
    "prefer-arrow-callback": "off"
  }
}

arrow-body-style and prefer-arrow-callback issue

If you use arrow-body-style or prefer-arrow-callback together with the prettier/prettier rule from this plugin, you can in some cases end up with invalid code due to a bug in ESLint’s autofix – see issue #65.

For this reason, it’s recommended to turn off these rules. The plugin:prettier/recommended config does that for you.

You can still use these rules together with this plugin if you want, because the bug does not occur all the time. But if you do, you need to keep in mind that you might end up with invalid code, where you manually have to insert a missing closing parenthesis to get going again.

If you’re fixing large amounts of previously unformatted code, consider temporarily disabling the prettier/prettier rule and running eslint --fix and prettier --write separately.

Options

Note: While it is possible to pass options to Prettier via your ESLint configuration file, it is not recommended because editor extensions such as prettier-atom and prettier-vscode will read .prettierrc, but won’t read settings from ESLint, which can lead to an inconsistent experience.


Contributing

See CONTRIBUTING.md



stream-http Build Status

Sauce Test Status

This module is an implementation of Node’s native http module for the browser. It tries to match Node’s API and behavior as closely as possible, but some features aren’t available, since browsers don’t give nearly as much control over requests.

This is heavily inspired by, and intended to replace, http-browserify.

What does it do?

In accordance with its name, stream-http tries to provide data to its caller before the request has completed whenever possible.

Backpressure, allowing the browser to only pull data from the server as fast as it is consumed, is supported in:

- Chrome >= 58 (using fetch and WritableStream)

The following browsers support true streaming, where only a small amount of the response has to be held in memory at once:

- Chrome >= 43 (using the fetch API)
- Firefox >= 9 (using moz-chunked-arraybuffer responseType with xhr)

The following browsers support pseudo-streaming, where the data is available before the request finishes, but the entire response must be held in memory:

- Chrome
- Safari >= 5, and maybe older
- IE >= 10
- Most other WebKit-based browsers, including the default Android browser

All browsers newer than IE8 support binary responses. Of the browsers listed above as supporting true streaming or pseudo-streaming, all support it for binary data as well, except IE10. Old (Presto-based) Opera does not support binary streaming either.

IE8 note:

As of version 2.0.0, IE8 support requires the user to supply polyfills for Object.keys, Array.prototype.forEach, and Array.prototype.indexOf. Example implementations are provided in ie8-polyfill.js; alternately, you may want to consider using es5-shim. All browsers with full ES5 support shouldn’t require any polyfills.

How do you use it?

The intent is to have the same API as the client part of the Node HTTP module. The interfaces are the same wherever practical, although limitations in browsers make an exact clone of the Node API impossible.

This module implements http.request, http.get, and most of http.ClientRequest and http.IncomingMessage in addition to http.METHODS and http.STATUS_CODES. See the Node docs for how these work.

Extra features compared to Node

This module has to make some tradeoffs to support binary data and/or streaming. Generally, the module can make a fairly good decision about which underlying browser features to use, but sometimes it helps to get a little input from the developer.

Features missing compared to Node

Example

http.get('/bundle.js', function (res) {
    var div = document.getElementById('result');
    div.innerHTML += 'GET /beep<br>';

    res.on('data', function (buf) {
        div.innerHTML += buf;
    });

    res.on('end', function () {
        div.innerHTML += '<br>__END__';
    });
})

Running tests

There are two sets of tests: the tests that run in Node (found in test/node) and the tests that run in the browser (found in test/browser). Normally the browser tests run on Sauce Labs.

Running npm test will run both sets of tests, but in order for the Sauce Labs tests to run you will need to sign up for an account (free for open source projects) and put the credentials in a .zuulrc file.

To run just the Node tests, run npm run test-node.

To run the browser tests locally, run npm run test-browser-local and point your browser to http://localhost:8080/__zuul



Form-Data NPM Module Join the chat at https://gitter.im/form-data/form-data

A library to create readable "multipart/form-data" streams. Can be used to submit forms and file uploads to other web applications.

The API of this library is inspired by the XMLHttpRequest-2 FormData Interface.

Linux Build MacOS Build Windows Build

Coverage Status Dependency Status bitHound Overall Score

Install

npm install --save form-data

Usage

In this example we are constructing a form with 3 fields that contain a string, a buffer and a file stream.

var FormData = require('form-data');
var fs = require('fs');

var form = new FormData();
form.append('my_field', 'my value');
form.append('my_buffer', new Buffer(10));
form.append('my_file', fs.createReadStream('/foo/bar.jpg'));

You can also use an http response stream:

var FormData = require('form-data');
var http = require('http');

var form = new FormData();

http.request('http://nodejs.org/images/logo.png', function(response) {
  form.append('my_field', 'my value');
  form.append('my_buffer', new Buffer(10));
  form.append('my_logo', response);
});

Or @mikeal’s request stream:

var FormData = require('form-data');
var request = require('request');

var form = new FormData();

form.append('my_field', 'my value');
form.append('my_buffer', new Buffer(10));
form.append('my_logo', request('http://nodejs.org/images/logo.png'));

In order to submit this form to a web application, call submit(url, [callback]) method:

form.submit('http://example.org/', function(err, res) {
  // res – response object (http.IncomingMessage)
  res.resume();
});

For more advanced request manipulations submit() method returns http.ClientRequest object, or you can choose from one of the alternative submission methods.

Custom options

You can provide custom options, such as maxDataSize:

var FormData = require('form-data');

var form = new FormData({ maxDataSize: 20971520 });
form.append('my_field', 'my value');
form.append('my_buffer', /* something big */);

The list of available options can be found in combined-stream.

Alternative submission methods

You can use Node’s http client interface:

var http = require('http');

var request = http.request({
  method: 'post',
  host: 'example.org',
  path: '/upload',
  headers: form.getHeaders()
});

form.pipe(request);

request.on('response', function(res) {
  console.log(res.statusCode);
});

Or if you would prefer the 'Content-Length' header to be set for you:

form.submit('example.org/upload', function(err, res) {
  console.log(res.statusCode);
});

To use custom headers and pre-known length in parts:

var CRLF = '\r\n';
var form = new FormData();

var options = {
  header: CRLF + '--' + form.getBoundary() + CRLF + 'X-Custom-Header: 123' + CRLF + CRLF,
  knownLength: 1
};

form.append('my_buffer', buffer, options);

form.submit('http://example.com/', function(err, res) {
  if (err) throw err;
  console.log('Done');
});

Form-Data can recognize and fetch all the required information from common types of streams (fs.readStream, http.response and @mikeal’s request); for some other types of streams you’d need to provide “file”-related information manually:

someModule.stream(function(err, stdout, stderr) {
  if (err) throw err;

  var form = new FormData();

  form.append('file', stdout, {
    filename: 'unicycle.jpg', // ... or:
    filepath: 'photos/toys/unicycle.jpg',
    contentType: 'image/jpeg',
    knownLength: 19806
  });

  form.submit('http://example.com/', function(err, res) {
    if (err) throw err;
    console.log('Done');
  });
});

The filepath property overrides filename and may contain a relative path. This is typically used when uploading multiple files from a directory.

For edge cases, like a POST request to a URL with a query string or passing HTTP auth credentials, an object can be passed to form.submit() as the first parameter:

form.submit({
  host: 'example.com',
  path: '/probably.php?extra=params',
  auth: 'username:password'
}, function(err, res) {
  console.log(res.statusCode);
});

If you also need to send custom HTTP headers with the POST request, you can use the headers key in the first parameter of form.submit():

form.submit({
  host: 'example.com',
  path: '/surelynot.php',
  headers: {'x-test-header': 'test-header-value'}
}, function(err, res) {
  console.log(res.statusCode);
});

Integration with other libraries

Request

Form submission using request:

var formData = {
  my_field: 'my_value',
  my_file: fs.createReadStream(__dirname + '/unicycle.jpg'),
};

request.post({url:'http://service.com/upload', formData: formData}, function(err, httpResponse, body) {
  if (err) {
    return console.error('upload failed:', err);
  }
  console.log('Upload successful!  Server responded with:', body);
});

For more details see request readme.

node-fetch

You can also submit a form using node-fetch:

var form = new FormData();

form.append('a', 1);

fetch('http://example.com', { method: 'POST', body: form })
    .then(function(res) {
        return res.json();
    }).then(function(json) {
        console.log(json);
    });

Notes



functional-red-black-tree

A fully persistent red-black tree written 100% in JavaScript. Works both in node.js and in the browser via browserify.

Functional (or fully persistent) data structures allow for non-destructive updates. So if you insert an element into the tree, it returns a new tree with the inserted element rather than destructively updating the existing tree in place. Doing this requires using extra memory, and if one were naive it could cost as much as reallocating the entire tree. Instead, this data structure saves some memory by recycling references to previously allocated subtrees. This requires only O(log(n)) additional memory per update instead of a full O(n) copy.

One advantage of this is that it is possible to apply insertions and removals to the tree while still iterating over previous versions of the tree. Functional and persistent data structures can also be useful in many geometric algorithms, like point location within triangulations or ray queries, and can be used to analyze the history of executing various algorithms. This added power comes at a cost, though, since it is generally a bit slower to use a functional data structure than an imperative version. However, if your application needs this behavior then you may consider using this module.
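The path-copying idea behind persistence can be sketched on a plain (unbalanced) binary search tree rather than a red-black tree: insert returns a new root, copying only the nodes on the search path and sharing every untouched subtree.

```javascript
// Persistence via path copying, sketched on a plain BST (no rebalancing).
function insert(node, key, value) {
  if (node === null) return { key: key, value: value, left: null, right: null };
  if (key < node.key) {
    // Copy this node; reuse the untouched right subtree by reference.
    return { key: node.key, value: node.value,
             left: insert(node.left, key, value), right: node.right };
  }
  return { key: node.key, value: node.value,
           left: node.left, right: insert(node.right, key, value) };
}

var t1 = insert(null, 2, 'two');
var t2 = insert(t1, 1, 'one');
var t3 = insert(t2, 3, 'three');

console.log(t1.left);             // null: t1 was not mutated by later inserts
console.log(t3.left === t2.left); // true: t3 shares t2's untouched left subtree
```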



Install

npm install functional-red-black-tree



Example

Here is an example of some basic usage:

//Load the library
var createTree = require("functional-red-black-tree")

//Create a tree
var t1 = createTree()

//Insert some items into the tree
var t2 = t1.insert(1, "foo")
var t3 = t2.insert(2, "bar")

//Remove something
var t4 = t3.remove(1)


API

var createTree = require("functional-red-black-tree")

Overview

Tree methods

var tree = createTree([compare])

Creates an empty functional tree

Returns An empty tree ordered by compare

tree.keys

A sorted array of all the keys in the tree

tree.values

A sorted array of all the values in the tree

tree.length

The number of items in the tree

tree.get(key)

Retrieves the value associated to the given key

Returns The value of the first node associated to key

tree.insert(key, value)

Creates a new tree with the new pair inserted.

Returns A new tree with key and value inserted

tree.remove(key)

Removes the first item with key in the tree

Returns A new tree with the given item removed if it exists

tree.find(key)

Returns an iterator pointing to the first item in the tree with key, otherwise null.

tree.ge(key)

Find the first item in the tree whose key is >= key

Returns An iterator at the given element.

tree.gt(key)

Finds the first item in the tree whose key is > key

Returns An iterator at the given element

tree.lt(key)

Finds the last item in the tree whose key is < key

Returns An iterator at the given element

tree.le(key)

Finds the last item in the tree whose key is <= key

Returns An iterator at the given element

tree.at(position)

Finds an iterator starting at the given position

Returns An iterator starting at position

tree.begin

An iterator pointing to the first element in the tree

tree.end

An iterator pointing to the last element in the tree

tree.forEach(visitor(key,value)[, lo[, hi]])

Walks a visitor function over the nodes of the tree in order.

Returns The last value returned by the callback

tree.root

Returns the root node of the tree

Node properties

Each node of the tree has the following properties:

node.key

The key associated to the node

node.value

The value associated to the node

node.left

The left subtree of the node

node.right

The right subtree of the node

Iterator methods

iter.key

The key of the item referenced by the iterator

iter.value

The value of the item referenced by the iterator

iter.node

The value of the node at the iterator’s current position, or null if the iterator is not valid.

iter.tree

The tree associated to the iterator

iter.index

Returns the position of this iterator in the sequence.

iter.valid

Checks if the iterator is valid

iter.clone()

Makes a copy of the iterator

iter.remove()

Removes the item at the position of the iterator

Returns A new binary search tree with iter’s item removed

iter.update(value)

Updates the value of the node in the tree at this iterator

Returns A new binary search tree with the corresponding node updated

iter.next()

Advances the iterator to the next position

iter.prev()

Moves the iterator backward one element

iter.hasNext

If true, then the iterator is not at the end of the sequence

iter.hasPrev

If true, then the iterator is not at the beginning of the sequence



Credits

elliptic

Saucelabs Test Status

Fast elliptic-curve cryptography in a plain javascript implementation.

NOTE: Please take a look at http://safecurves.cr.yp.to/ before choosing a curve for your cryptography operations.

Incentive

ECC is much slower than regular RSA cryptography, and the JS implementations are even slower.

Benchmarks

$ node benchmarks/index.js
Benchmarking: sign
elliptic#sign x 262 ops/sec ±0.51% (177 runs sampled)
eccjs#sign x 55.91 ops/sec ±0.90% (144 runs sampled)
------------------------
Fastest is elliptic#sign
========================
Benchmarking: verify
elliptic#verify x 113 ops/sec ±0.50% (166 runs sampled)
eccjs#verify x 48.56 ops/sec ±0.36% (125 runs sampled)
------------------------
Fastest is elliptic#verify
========================
Benchmarking: gen
elliptic#gen x 294 ops/sec ±0.43% (176 runs sampled)
eccjs#gen x 62.25 ops/sec ±0.63% (129 runs sampled)
------------------------
Fastest is elliptic#gen
========================
Benchmarking: ecdh
elliptic#ecdh x 136 ops/sec ±0.85% (156 runs sampled)
------------------------
Fastest is elliptic#ecdh
========================

API

ECDSA

var EC = require('elliptic').ec;

// Create and initialize EC context
// (better do it once and reuse it)
var ec = new EC('secp256k1');

// Generate keys
var key = ec.genKeyPair();

// Sign the message's hash (input must be an array, or a hex-string)
var msgHash = [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 ];
var signature = key.sign(msgHash);

// Export DER encoded signature in Array
var derSign = signature.toDER();

// Verify signature
console.log(key.verify(msgHash, derSign));

// CHECK WITH NO PRIVATE KEY

var pubPoint = key.getPublic();
var x = pubPoint.getX();
var y = pubPoint.getY();

// Public Key MUST be either:
// 1) '04' + hex string of x + hex string of y; or
// 2) object with two hex string properties (x and y); or
// 3) object with two buffer properties (x and y)
var pub = pubPoint.encode('hex');                                 // case 1
var pub = { x: x.toString('hex'), y: y.toString('hex') };         // case 2
var pub = { x: x.toBuffer(), y: y.toBuffer() };                   // case 3
var pub = { x: x.toArrayLike(Buffer), y: y.toArrayLike(Buffer) }; // case 3

// Import public key
var key = ec.keyFromPublic(pub, 'hex');

// Signature MUST be either:
// 1) DER-encoded signature as hex-string; or
// 2) DER-encoded signature as buffer; or
// 3) object with two hex-string properties (r and s); or
// 4) object with two buffer properties (r and s)

var signature = '3046022100...'; // case 1
var signature = new Buffer('...'); // case 2
var signature = { r: 'b1fc...', s: '9c42...' }; // case 3

// Verify signature
console.log(key.verify(msgHash, signature));

EdDSA

var EdDSA = require('elliptic').eddsa;

// Create and initialize EdDSA context
// (better do it once and reuse it)
var ec = new EdDSA('ed25519');

// Create key pair from secret
var key = ec.keyFromSecret('693e3c...'); // hex string, array or Buffer

// Sign the message's hash (input must be an array, or a hex-string)
var msgHash = [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 ];
var signature = key.sign(msgHash).toHex();

// Verify signature
console.log(key.verify(msgHash, signature));

// CHECK WITH NO PRIVATE KEY

// Import public key
var pub = '0a1af638...';
var key = ec.keyFromPublic(pub, 'hex');

// Verify signature
var signature = '70bed1...';
console.log(key.verify(msgHash, signature));

ECDH

var EC = require('elliptic').ec;
var ec = new EC('curve25519');

// Generate keys
var key1 = ec.genKeyPair();
var key2 = ec.genKeyPair();

var shared1 = key1.derive(key2.getPublic());
var shared2 = key2.derive(key1.getPublic());

console.log('Both shared secrets are BN instances');
console.log(shared1.toString(16));
console.log(shared2.toString(16));

For three or more members:

var EC = require('elliptic').ec;
var ec = new EC('curve25519');

var A = ec.genKeyPair();
var B = ec.genKeyPair();
var C = ec.genKeyPair();

var AB = A.getPublic().mul(B.getPrivate())
var BC = B.getPublic().mul(C.getPrivate())
var CA = C.getPublic().mul(A.getPrivate())

var ABC = AB.mul(C.getPrivate())
var BCA = BC.mul(A.getPrivate())
var CAB = CA.mul(B.getPrivate())

console.log(ABC.getX().toString(16))
console.log(BCA.getX().toString(16))
console.log(CAB.getX().toString(16))

NOTE: .derive() returns a BN instance.

Elliptic.js supports the following curve types:

The following curve ‘presets’ are embedded into the library:

NOTE: curve25519 cannot be used for ECDSA; use ed25519 instead.

Implementation details

ECDSA is using deterministic k value generation as per RFC6979. Most of the curve operations are performed on non-affine coordinates (either projective or extended), various windowing techniques are used for different cases.

All operations are performed in a reduction context using bn.js; hashing is provided by hash.js.



bn.js

BigNum in pure javascript

Build Status

Install

npm install --save bn.js

Usage

const BN = require('bn.js');

var a = new BN('dead', 16);
var b = new BN('101010', 2);

var res = a.add(b);
console.log(res.toString(10));  // 57047

Note: decimals are not supported in this library.

Notation

Prefixes

There are several prefixes to instructions that affect the way they work. Here is the list, in order of appearance in the function name:

Postfixes

The only available postfix at the moment is:

Examples

Instructions

Prefixes/postfixes are put in parens at the end of the line. endian could be either le (little-endian) or be (big-endian).

Utilities

Arithmetics

Bit operations

Reduction

Fast reduction

When doing lots of reductions using the same modulo, it might be beneficial to use some tricks: like Montgomery multiplication, or using special algorithm for Mersenne Prime.

Reduction context

To enable these tricks, one should create a reduction context:

var red = BN.red(num);

where num is just a BN instance.

Or:

var red = BN.red(primeName);

Where primeName is either of these Mersenne Primes:

Or:

var red = BN.mont(num);

To reduce numbers with the Montgomery trick. .mont() is generally faster than .red(num), but slower than BN.red(primeName).

Converting numbers

Before performing anything in a reduction context, numbers should be converted to it. Usually, this means that one should:

Here is how one may convert numbers to red:

var redA = a.toRed(red);

Where red is a reduction context created using the instructions above.

Here is how to convert them back:

var a = redA.fromRed();

Red instructions

Most of the instructions from the very start of this readme have their counterparts in red context:
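The idea behind a reduction context, independent of bn.js itself, can be sketched with native BigInt: fix a modulus once, then keep every intermediate result reduced (the modulus below is a hypothetical placeholder, not one of the library’s presets).

```javascript
// Sketch of the reduction-context idea using plain BigInt arithmetic.
function makeRed(m) {
  return {
    toRed: function (a) { return ((a % m) + m) % m; }, // normalize into [0, m)
    add:   function (a, b) { return (a + b) % m; },
    mul:   function (a, b) { return (a * b) % m; },
  };
}

// Hypothetical 128-bit modulus, for illustration only.
var red = makeRed(0xffffffffffffffffffffffffffffffffn);
var a = red.toRed(0xdeadn);    // same operands as the BN example above
var b = red.toRed(0b101010n);
console.log(red.add(a, b).toString(10)); // '57047', matching the BN example
```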




normalize-package-data Build Status

normalize-package-data exports a function that normalizes package metadata. This data is typically found in a package.json file, but in principle could come from any source - for example the npm registry.

normalize-package-data is used by read-package-json to normalize the data it reads from a package.json file. In turn, read-package-json is used by npm and various npm-related tools.

Installation

npm install normalize-package-data

Usage

Basic usage is really simple. You call the function that normalize-package-data exports. Let’s call it normalizeData.

normalizeData = require('normalize-package-data')
packageData = require("./package.json")
normalizeData(packageData)
// packageData is now normalized

Strict mode

You may activate strict validation by passing true as the second argument.

normalizeData = require('normalize-package-data')
packageData = require("./package.json")
normalizeData(packageData, true)
// packageData is now normalized

If strict mode is activated, only Semver 2.0 version strings are accepted. Otherwise, Semver 1.0 strings are accepted as well. Packages must have a name, and the name field must not contain leading or trailing whitespace.

Warnings

Optionally, you may pass a “warning” function. It gets called whenever the normalizeData function encounters something that doesn’t look right. It indicates less than perfect input data.

normalizeData = require('normalize-package-data')
packageData = require("./package.json")
warnFn = function(msg) { console.error(msg) }
normalizeData(packageData, warnFn)
// packageData is now normalized. Any number of warnings may have been logged.

You may combine strict validation with warnings by passing true as the second argument, and warnFn as third.

When the private field is set to true, warnings will be suppressed.

Potential exceptions

If the supplied data has an invalid name or version field, normalizeData will throw an error. Depending on where you call normalizeData, you may want to catch these errors so you can pass them to a callback.

What normalization (currently) entails

Rules for name field

If the name field is given, its value must be a string. The string may not:

Rules for version field

If the version field is given, its value must be a valid semver string, as determined by the semver.valid method. See the documentation for the semver module.

Credits

This package contains code based on read-package-json written by Isaac Z. Schlueter. Used with permission.




NPM version build status Test coverage Downloads Join the chat at https://gitter.im/eslint/doctrine



Doctrine

Doctrine is a JSDoc parser that parses documentation comments from JavaScript (you need to pass in the comment, not a whole JavaScript file).

Installation

You can install Doctrine using npm:

npm install doctrine --save-dev

Doctrine can also be used in web browsers using Browserify.

Usage

Require doctrine inside of your JavaScript:

var doctrine = require("doctrine");

parse()

The primary method is parse(), which accepts two arguments: the JSDoc comment to parse and an optional options object. The available options are:

Here’s a simple example:

var ast = doctrine.parse(
    [
        "/**",
        " * This function comment is parsed by doctrine",
        " * @param {{ok:String}} userName",
        "*/"
    ].join('\n'), { unwrap: true });

This example returns the following AST:

{
    "description": "This function comment is parsed by doctrine",
    "tags": [
        {
            "title": "param",
            "description": null,
            "type": {
                "type": "RecordType",
                "fields": [
                    {
                        "type": "FieldType",
                        "key": "ok",
                        "value": {
                            "type": "NameExpression",
                            "name": "String"
                        }
                    }
                ]
            },
            "name": "userName"
        }
    ]
}

See the demo page for more detail.

Team

These folks keep the project moving and are resources for help:

Contributing

Issues and pull requests will be triaged and responded to as quickly as possible. We operate under the ESLint Contributor Guidelines, so please be sure to read them before contributing. If you’re not sure where to dig in, check out the issues.

Frequently Asked Questions

Can I pass a whole JavaScript file to Doctrine?

No. Doctrine can only parse JSDoc comments, so you'll need to pass just the JSDoc comment to Doctrine in order for it to work.
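One way around this, sketched below with a plain regex (not part of Doctrine's API), is to extract the comment blocks from the source first and then parse each one individually:

```javascript
// Sketch: pull /** ... */ blocks out of a source string first,
// then hand each one to doctrine.parse() individually.
const src = [
  '/**',
  ' * Adds two numbers.',
  ' * @param {number} a',
  ' */',
  'function add(a, b) { return a + b; }'
].join('\n');

const jsdocComments = src.match(/\/\*\*[\s\S]*?\*\//g) || [];
console.log(jsdocComments.length); // 1
// jsdocComments.forEach(c => doctrine.parse(c, { unwrap: true }));
```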

doctrine

esprima

Some functions are derived from esprima.

closure-compiler

Some extensions are derived from closure-compiler.

Where to ask for help?

Join our Chatroom



is-glob NPM version NPM monthly downloads NPM total downloads Linux Build Status Windows Build Status

Returns true if the given string looks like a glob pattern or an extglob pattern. This makes it easy to create code that only uses external modules like node-glob when necessary, resulting in much faster code execution and initialization time, and a better user experience.

Please consider following this project’s author, Jon Schlinkert, and consider starring the project to show your :heart: and support.

Install

Install with npm:

$ npm install --save is-glob

You might also be interested in is-valid-glob and has-glob.

Usage

var isGlob = require('is-glob');

Default behavior

True

Patterns that have glob characters or regex patterns will return true:

isGlob('!foo.js');
isGlob('*.js');
isGlob('**/abc.js');
isGlob('abc/*.js');
isGlob('abc/(aaa|bbb).js');
isGlob('abc/[a-z].js');
isGlob('abc/{a,b}.js');
//=> true

Extglobs

isGlob('abc/@(a).js');
isGlob('abc/!(a).js');
isGlob('abc/+(a).js');
isGlob('abc/*(a).js');
isGlob('abc/?(a).js');
//=> true

False

Escaped globs or extglobs return false:

isGlob('abc/\\@(a).js');
isGlob('abc/\\!(a).js');
isGlob('abc/\\+(a).js');
isGlob('abc/\\*(a).js');
isGlob('abc/\\?(a).js');
isGlob('\\!foo.js');
isGlob('\\*.js');
isGlob('\\*\\*/abc.js');
isGlob('abc/\\*.js');
isGlob('abc/\\(aaa|bbb).js');
isGlob('abc/\\[a-z].js');
isGlob('abc/\\{a,b}.js');
//=> false

Patterns that do not have glob patterns return false:

isGlob('abc.js');
isGlob('abc/def/ghi.js');
isGlob('foo.js');
isGlob('abc/@.js');
isGlob('abc/+.js');
isGlob('abc/?.js');
isGlob();
isGlob(null);
//=> false

Arrays are also false (If you want to check if an array has a glob pattern, use has-glob):

isGlob(['**/*.js']);
isGlob(['foo.js']);
//=> false

Option strict

When options.strict === false, the behavior is less strict in determining whether a pattern is a glob, meaning that some patterns that would otherwise return false may return true. This is done so that matching libraries like micromatch have a chance at determining whether the pattern is a glob or not.

True

Patterns that have glob characters or regex patterns will return true:

isGlob('!foo.js', {strict: false});
isGlob('*.js', {strict: false});
isGlob('**/abc.js', {strict: false});
isGlob('abc/*.js', {strict: false});
isGlob('abc/(aaa|bbb).js', {strict: false});
isGlob('abc/[a-z].js', {strict: false});
isGlob('abc/{a,b}.js', {strict: false});
//=> true

Extglobs

isGlob('abc/@(a).js', {strict: false});
isGlob('abc/!(a).js', {strict: false});
isGlob('abc/+(a).js', {strict: false});
isGlob('abc/*(a).js', {strict: false});
isGlob('abc/?(a).js', {strict: false});
//=> true

False

Escaped globs or extglobs return false:

isGlob('\\!foo.js', {strict: false});
isGlob('\\*.js', {strict: false});
isGlob('\\*\\*/abc.js', {strict: false});
isGlob('abc/\\*.js', {strict: false});
isGlob('abc/\\(aaa|bbb).js', {strict: false});
isGlob('abc/\\[a-z].js', {strict: false});
isGlob('abc/\\{a,b}.js', {strict: false});
//=> false

About

Contributing

Pull requests and stars are always welcome. For bugs and feature requests, please create an issue.

Running Tests

Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command:

Building docs

(This project’s readme.md is generated by verb, please don’t edit the readme directly. Any changes to the readme must be made in the .verb.md readme template.)

To generate the readme, run the following command:

You might also be interested in these projects:

Commits Contributor
47 jonschlinkert
5 doowb
1 phated
1 danhper
1 paulmillr

Author

Jon Schlinkert


This file was generated by verb-generate-readme, v0.8.0, on March 27, 2019.


Overview Build Status

A regex that tokenizes JavaScript.

var jsTokens = require("js-tokens").default

var jsString = "var foo=opts.foo;\n..."

jsString.match(jsTokens)
// ["var", " ", "foo", "=", "opts", ".", "foo", ";", "\n", ...]


Installation

npm install js-tokens

import jsTokens from "js-tokens"
// or:
var jsTokens = require("js-tokens").default


Usage

jsTokens

A regex with the g flag that matches JavaScript tokens.

The regex always matches, even on invalid JavaScript and the empty string.

The next match is always directly after the previous.
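To illustrate that contract (with a deliberately simplified token regex, not the real jsTokens), consecutive exec() calls on a /g regex can partition a string with no gaps:

```javascript
// Toy tokenizer: every alternative matches at least one character,
// so each match starts exactly where the previous one ended.
const toyTokens = /\s+|[A-Za-z_$][\w$]*|\d+|[^\s]/g;

const src = 'var x = 42;';
const tokens = [];
let match;
while ((match = toyTokens.exec(src)) !== null) {
  tokens.push(match[0]);
}
console.log(tokens); // ['var', ' ', 'x', ' ', '=', ' ', '42', ';']
```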

var token = matchToToken(match)

import {matchToToken} from "js-tokens"
// or:
var matchToToken = require("js-tokens").matchToToken

Takes a match returned by jsTokens.exec(string), and returns a {type: String, value: String} object. The following types are available:

Multi-line comments and strings also have a closed property indicating if the token was closed or not (see below).

Comments and strings both come in several flavors. To distinguish them, check if the token starts with //, /*, ', ", or `.

Names are ECMAScript IdentifierNames, that is, including both identifiers and keywords. You may use is-keyword-js to tell them apart.

Whitespace includes both line terminators and other whitespace.



ECMAScript support

The intention is to always support the latest ECMAScript version whose feature set has been finalized.

If adding support for a newer version requires changes, a new version with a major version bump will be released.

Currently, ECMAScript 2018 is supported.



Invalid code handling

Unterminated strings are still matched as strings. JavaScript strings cannot contain (unescaped) newlines, so unterminated strings simply end at the end of the line. Unterminated template strings can contain unescaped newlines, though, so they go on to the end of input.

Unterminated multi-line comments are also still matched as comments. They simply go on to the end of the input.

Unterminated regex literals are likely matched as division and whatever is inside the regex.

Invalid ASCII characters have their own capturing group.

Invalid non-ASCII characters are treated as names, to simplify the matching of names (except unicode spaces which are treated as whitespace). Note: See also the ES2018 section.

Regex literals may contain invalid regex syntax. They are still matched as regex literals. They may also contain repeated regex flags, to keep the regex simple.

Strings may contain invalid escape sequences.



Limitations

Tokenizing JavaScript using regexes—in fact, one single regex—won’t be perfect. But that’s not the point either.

You may compare jsTokens with esprima by using esprima-compare.js. See npm run esprima-compare!

Template string interpolation

Template strings are matched as single tokens, from the starting ` to the ending `, including interpolations (whose tokens are not matched individually).

Matching template string interpolations requires recursive balancing of { and }—something that JavaScript regexes cannot do. Only one level of nesting is supported.

Division and regex literals collision

Consider this example:

var g = 9.82
var number = bar / 2/g

var regex = / 2/g

A human can easily understand that in the number line we’re dealing with division, and in the regex line we’re dealing with a regex literal. How come? Because humans can look at the whole code to put the / characters in context. A JavaScript regex cannot. It only sees forwards. (Well, ES2018 regexes can also look backwards. See the ES2018 section).

When the jsTokens regex scans through the above, it will see the following at the end of both the number and regex rows:

/ 2/g

It is then impossible to know if that is a regex literal, or part of an expression dealing with division.

Here is a similar case:

foo /= 2/g
foo(/= 2/g)

The first line divides the foo variable with 2/g. The second line calls the foo function with the regex literal /= 2/g. Again, since jsTokens only sees forwards, it cannot tell the two cases apart.

There are some cases where we can tell division and regex literals apart, though.

First off, we have the simple cases where there’s only one slash in the line:

var foo = 2/g
foo /= 2

Regex literals cannot contain newlines, so the above cases are correctly identified as division. Things are only problematic when there is more than one non-comment slash in a single line.

Secondly, not every character is a valid regex flag.

var number = bar / 2/e

The above example is also correctly identified as division, because e is not a valid regex flag. I initially wanted to future-proof by allowing [a-zA-Z]* (any letter) as flags, but it is not worth it since it increases the number of ambiguous cases. So only the standard g, m, i, y and u flags are allowed. This means that the above example will be identified as division as long as you don't rename the e variable to some permutation of gmiyu, 1 to 5 characters long.

Lastly, we can look forward for information.

Please consult the regex source and the test cases for precise information on when regex or division is matched (should you need to know). In short, you could sum it up as:

If the end of a statement looks like a regex literal (even if it isn’t), it will be treated as one. Otherwise it should work as expected (if you write sane code).

ES2018

ES2018 added some nice regex improvements to the language.

These things would be nice to do, but are not critical. They probably have to wait until the oldest maintained Node.js LTS release supports those features.





fill-range Donate NPM version NPM monthly downloads NPM total downloads Linux Build Status

Fill in a range of numbers or letters, optionally passing an increment or step to use, or create a regex-compatible range with options.toRegex.

Please consider following this project’s author, Jon Schlinkert, and consider starring the project to show your :heart: and support.

Install

Install with npm:

$ npm install --save fill-range

Usage

Expands numbers and letters, optionally using a step as the last argument. (Numbers may be defined as JavaScript numbers or strings).

const fill = require('fill-range');
// fill(from, to[, step, options]);

console.log(fill('1', '10')); //=> ['1', '2', '3', '4', '5', '6', '7', '8', '9', '10']
console.log(fill('1', '10', { toRegex: true })); //=> [1-9]|10

Params

Examples

By default, an array of values is returned.

Alphabetical ranges

console.log(fill('a', 'e')); //=> ['a', 'b', 'c', 'd', 'e']
console.log(fill('A', 'E')); //=> [ 'A', 'B', 'C', 'D', 'E' ]

Numerical ranges

Numbers can be defined as actual numbers or strings.

console.log(fill(1, 5));     //=> [ 1, 2, 3, 4, 5 ]
console.log(fill('1', '5')); //=> [ 1, 2, 3, 4, 5 ]

Negative ranges

Numbers can be defined as actual numbers or strings.

console.log(fill('-5', '-1')); //=> [ '-5', '-4', '-3', '-2', '-1' ]
console.log(fill('-5', '5')); //=> [ '-5', '-4', '-3', '-2', '-1', '0', '1', '2', '3', '4', '5' ]

Steps (increments)

// numerical ranges with increments
console.log(fill('0', '25', 4)); //=> [ '0', '4', '8', '12', '16', '20', '24' ]
console.log(fill('0', '25', 5)); //=> [ '0', '5', '10', '15', '20', '25' ]
console.log(fill('0', '25', 6)); //=> [ '0', '6', '12', '18', '24' ]

// alphabetical ranges with increments
console.log(fill('a', 'z', 4)); //=> [ 'a', 'e', 'i', 'm', 'q', 'u', 'y' ]
console.log(fill('a', 'z', 5)); //=> [ 'a', 'f', 'k', 'p', 'u', 'z' ]
console.log(fill('a', 'z', 6)); //=> [ 'a', 'g', 'm', 's', 'y' ]

Options

options.step

Type: number (formatted as a string or number)

Default: undefined

Description: The increment to use for the range. Can be used with letters or numbers.

Example(s)

// numbers
console.log(fill('1', '10', 2)); //=> [ '1', '3', '5', '7', '9' ]
console.log(fill('1', '10', 3)); //=> [ '1', '4', '7', '10' ]
console.log(fill('1', '10', 4)); //=> [ '1', '5', '9' ]

// letters
console.log(fill('a', 'z', 5)); //=> [ 'a', 'f', 'k', 'p', 'u', 'z' ]
console.log(fill('a', 'z', 7)); //=> [ 'a', 'h', 'o', 'v' ]
console.log(fill('a', 'z', 9)); //=> [ 'a', 'j', 's' ]

options.strictRanges

Type: boolean

Default: false

Description: By default, null is returned when an invalid range is passed. Enable this option to throw a RangeError on invalid ranges.

Example(s)

The following are all invalid:

fill('1.1', '2');   // decimals not supported in ranges
fill('a', '2');     // incompatible range values
fill(1, 10, 'foo'); // invalid "step" argument

options.stringify

Type: boolean

Default: undefined

Description: Cast all returned values to strings. By default, integers are returned as numbers.

Example(s)

console.log(fill(1, 5));                    //=> [ 1, 2, 3, 4, 5 ]
console.log(fill(1, 5, { stringify: true })); //=> [ '1', '2', '3', '4', '5' ]

options.toRegex

Type: boolean

Default: undefined

Description: Create a regex-compatible source string, instead of expanding values to an array.

Example(s)

// alphabetical range
console.log(fill('a', 'e', { toRegex: true })); //=> '[a-e]'
// alphabetical with step
console.log(fill('a', 'z', 3, { toRegex: true })); //=> 'a|d|g|j|m|p|s|v|y'
// numerical range
console.log(fill('1', '100', { toRegex: true })); //=> '[1-9]|[1-9][0-9]|100'
// numerical range with zero padding
console.log(fill('000001', '100000', { toRegex: true }));
//=> '0{5}[1-9]|0{4}[1-9][0-9]|0{3}[1-9][0-9]{2}|0{2}[1-9][0-9]{3}|0[1-9][0-9]{4}|100000'

options.transform

Type: function

Default: undefined

Description: Customize each value in the returned array (or string). You can also pass this function as the last argument to fill().

Example(s)

// add zero padding
console.log(fill(1, 5, value => String(value).padStart(4, '0')));
//=> ['0001', '0002', '0003', '0004', '0005']

About

Contributing

Pull requests and stars are always welcome. For bugs and feature requests, please create an issue.

Running Tests

Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command:

Building docs

(This project’s readme.md is generated by verb, please don’t edit the readme directly. Any changes to the readme must be made in the .verb.md readme template.)

To generate the readme, run the following command:

Commits Contributor
116 jonschlinkert
4 paulmillr
2 realityking
2 bluelovers
1 edorivai
1 wtgtybhertgeghgtwtg

Author

Jon Schlinkert

Please consider supporting me on Patreon, or start your own Patreon page!


This file was generated by verb-generate-readme, v0.8.0, on April 08, 2019.


bn.js

BigNum in pure JavaScript

Build Status

Install

npm install --save bn.js

Usage

const BN = require('bn.js');

var a = new BN('dead', 16);
var b = new BN('101010', 2);

var res = a.add(b);
console.log(res.toString(10));  // 57047

Note: decimals are not supported in this library.

Notation

Prefixes

There are several prefixes to instructions that affect the way they work. Here is the list of them in the order of appearance in the function name:

Postfixes

Examples

Instructions

Prefixes/postfixes are put in parens at the end of the line. endian could be either le (little-endian) or be (big-endian).

Utilities

Arithmetics

Bit operations

Reduction

Fast reduction

When doing lots of reductions using the same modulo, it might be beneficial to use some tricks, like Montgomery multiplication or a special algorithm for Mersenne primes.

Reduction context

To enable these tricks, one should create a reduction context:

var red = BN.red(num);

where num is just a BN instance.

Or:

var red = BN.red(primeName);

Where primeName is one of these Mersenne primes:

Or:

var red = BN.mont(num);

Use .mont() to reduce numbers with the Montgomery trick. It is generally faster than .red(num), but slower than BN.red(primeName).

Converting numbers

Before performing any operations in a reduction context, numbers must first be converted into it. Usually, this means that one should:

Here is how one may convert numbers to red:

var redA = a.toRed(red);

Where red is a reduction context created using the instructions above.

Here is how to convert them back:

var a = redA.fromRed();

Red instructions

Most of the instructions from the very start of this readme have their counterparts in red context:

Number Size

Optimized for elliptic curves that work with 256-bit numbers. There is no limitation on the size of the numbers.



cacheable-request

Wrap native HTTP requests with RFC compliant cache support

Build Status Coverage Status npm npm

RFC 7234 compliant HTTP caching for native Node.js HTTP/HTTPS requests. Caching works out of the box in memory or is easily pluggable with a wide range of storage adapters.

Note: This is a low-level wrapper around the core HTTP modules; it's not a high-level request library.

Features

Install

npm install cacheable-request

Usage

const http = require('http');
const CacheableRequest = require('cacheable-request');

// Then instead of
const req = http.request('http://example.com', cb);
req.end();

// You can do
const cacheableRequest = new CacheableRequest(http.request);
const cacheReq = cacheableRequest('http://example.com', cb);
cacheReq.on('request', req => req.end());
// Future requests to 'example.com' will be returned from cache if still valid

// You can pass in any other http.request API compatible method to be wrapped with cache support:
const cacheableRequest = new CacheableRequest(https.request);
const cacheableRequest = new CacheableRequest(electron.net);

Storage Adapters

cacheable-request uses Keyv to support a wide range of storage adapters.

For example, to use Redis as a cache backend, you just need to install the official Redis Keyv storage adapter:

npm install @keyv/redis

And then you can pass CacheableRequest your connection string:

const cacheableRequest = new CacheableRequest(http.request, 'redis://user:pass@localhost:6379');

View all official Keyv storage adapters.

Keyv also supports anything that follows the Map API so it’s easy to write your own storage adapter or use a third-party solution.
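As a sketch of how small such an adapter can be (LoggingAdapter is a made-up name for illustration, not part of any package), anything that follows the Map API works:

```javascript
// Hypothetical adapter: a Map that logs cache reads and writes.
class LoggingAdapter extends Map {
  get(key) {
    console.log('cache get:', key);
    return super.get(key);
  }
  set(key, value) {
    console.log('cache set:', key);
    return super.set(key, value);
  }
}

const storageAdapter = new LoggingAdapter();
storageAdapter.set('greeting', 'hello');
console.log(storageAdapter.get('greeting')); // 'hello'
```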

e.g. the following are all valid storage adapters:

const storageAdapter = new Map();
// or
const storageAdapter = require('./my-storage-adapter');
// or
const QuickLRU = require('quick-lru');
const storageAdapter = new QuickLRU({ maxSize: 1000 });

const cacheableRequest = new CacheableRequest(http.request, storageAdapter);

View the Keyv docs for more information on how to use storage adapters.

API

new CacheableRequest(request, storageAdapter)

Returns the provided request function wrapped with cache support.

request

Type: function

Request function to wrap with cache support. Should be http.request or a similar API compatible request function.

storageAdapter

Type: Keyv storage adapter
Default: new Map()

A Keyv storage adapter instance, or connection string if using with an official Keyv storage adapter.

Instance

cacheableRequest(opts, cb)

Returns an event emitter.

opts

Type: object, string

opts.cache

Type: boolean
Default: true

If the cache should be used. Setting this to false will completely bypass the cache for the current request.

opts.strictTtl

Type: boolean
Default: false

If set to true, once a cached resource has expired it is deleted and will have to be re-requested.

If set to false (default), after a cached resource’s TTL expires it is kept in the cache and will be revalidated on the next request with If-None-Match/If-Modified-Since headers.
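
To illustrate the difference, here is a simplified, self-contained sketch (not the library's actual code) of what happens to a cache entry under each setting; `entry` is a hypothetical cache record:

```javascript
// Simplified sketch of strictTtl behavior; the real library implements
// full RFC 7234 caching and revalidation.
function handleCacheEntry(entry, now, strictTtl) {
  const expired = now >= entry.storedAt + entry.ttl;
  if (!expired) {
    return { action: 'serve-from-cache' };
  }
  if (strictTtl) {
    // strictTtl: true — expired entries are deleted and re-requested
    return { action: 'delete-and-refetch' };
  }
  // strictTtl: false — keep the stale entry and revalidate it
  return {
    action: 'revalidate',
    headers: { 'If-None-Match': entry.etag }
  };
}
```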

opts.maxTtl

Type: number
Default: undefined

Limits TTL. The number represents milliseconds.

opts.automaticFailover

Type: boolean
Default: false

When set to true, if the DB connection fails we will automatically fall back to a network request. DB errors will still be emitted to notify you of the problem even though the request callback may succeed.

opts.forceRefresh

Type: boolean
Default: false

Forces refreshing the cache. If the response could be retrieved from the cache, it will perform a new request and override the cache instead.

cb

Type: function

The callback function which will receive the response as an argument.

The response can be either a Node.js HTTP response stream or a responselike object. The response will also have a fromCache property set with a boolean value.

.on(‘request’, request)

request event to get the request object of the request.

Note: This event will only fire if an HTTP request is actually made, not when a response is retrieved from cache. However, you should always handle the request event to end the request and handle any potential request errors.

.on(‘response’, response)

response event to get the response object from the HTTP request or cache.

.on(‘error’, error)

error event emitted in case of an error with the cache.

Errors emitted here will be an instance of CacheableRequest.RequestError or CacheableRequest.CacheError. You will only ever receive a RequestError if the request function throws (normally caused by invalid user input). Normal request errors should be handled inside the request event.

To properly handle all error scenarios you should use the following pattern:

cacheableRequest('example.com', cb)
  .on('error', err => {
    if (err instanceof CacheableRequest.CacheError) {
      handleCacheError(err); // Cache error
    } else if (err instanceof CacheableRequest.RequestError) {
      handleRequestError(err); // Request function thrown
    }
  })
  .on('request', req => {
    req.on('error', handleRequestError); // Request error emitted
    req.end();
  });

Note: Database connection errors are emitted here, however cacheable-request will attempt to re-request the resource and bypass the cache on a connection error. Therefore a database connection error doesn’t necessarily mean the request won’t be fulfilled.



asynckit NPM Module

Minimal async jobs utility library, with streams support.

PhantomJS Build Linux Build Windows Build

Coverage Status Dependency Status bitHound Overall Score

AsyncKit provides a harness for parallel and serial iterators over lists of items represented by arrays or objects. Optionally it accepts an abort function (which should be synchronously returned by the iterator for each item) and terminates leftover jobs upon an error event. For a specific iteration order, built-in (ascending and descending) and custom sort helpers are also supported, via the asynckit.serialOrdered method.

It runs iterators asynchronously to keep behavior stable and to prevent Maximum call stack size exceeded errors from synchronous iterators.
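
A self-contained sketch (not asynckit's actual code) of why that matters: when every callback fires synchronously, a serial loop recurses once per item, so deferring each step keeps the call stack flat.

```javascript
// Illustrative serial runner: with defer=false, each completed job calls
// the next one synchronously, so the stack grows with every item; with
// defer=true, setImmediate breaks the chain and the stack stays flat.
function runSerial(items, iterator, defer, done) {
  var results = [];
  function step(i) {
    if (i === items.length) return done(null, results);
    iterator(items[i], function (err, value) {
      if (err) return done(err, results);
      results.push(value);
      if (defer) setImmediate(function () { step(i + 1); });
      else step(i + 1); // synchronous recursion
    });
  }
  step(0);
}

runSerial([1, 2, 3], function (n, cb) { cb(null, n * 2); }, true, function (err, results) {
  console.log(results); // [ 2, 4, 6 ]
});
```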

compression size
asynckit.js 12.34 kB
asynckit.min.js 4.11 kB
asynckit.min.js.gz 1.47 kB

Install

$ npm install --save asynckit

Examples

Parallel Jobs

Runs iterator over provided array in parallel. Stores output in the result array, on the matching positions. In the unlikely event of an error from one of the jobs, it will terminate the rest of the active jobs (if an abort function is provided) and return the error along with salvaged data to the main callback function.

Input Array

var parallel = require('asynckit').parallel
  , assert   = require('assert')
  ;

var source         = [ 1, 1, 4, 16, 64, 32, 8, 2 ]
  , expectedResult = [ 2, 2, 8, 32, 128, 64, 16, 4 ]
  , expectedTarget = [ 1, 1, 2, 4, 8, 16, 32, 64 ]
  , target         = []
  ;

parallel(source, asyncJob, function(err, result)
{
  assert.deepEqual(result, expectedResult);
  assert.deepEqual(target, expectedTarget);
});

// async job accepts one element from the array
// and a callback function
function asyncJob(item, cb)
{
  // different delays (in ms) per item
  var delay = item * 25;

  // pretend different jobs take different time to finish
  // and not in consequential order
  var timeoutId = setTimeout(function() {
    target.push(item);
    cb(null, item * 2);
  }, delay);

  // allow to cancel "leftover" jobs upon error
  // return function, invoking of which will abort this job
  return clearTimeout.bind(null, timeoutId);
}

More examples can be found in test/test-parallel-array.js.

Input Object

It also supports named jobs, listed via an object.

var parallel = require('asynckit/parallel')
  , assert   = require('assert')
  ;

var source         = { first: 1, one: 1, four: 4, sixteen: 16, sixtyFour: 64, thirtyTwo: 32, eight: 8, two: 2 }
  , expectedResult = { first: 2, one: 2, four: 8, sixteen: 32, sixtyFour: 128, thirtyTwo: 64, eight: 16, two: 4 }
  , expectedTarget = [ 1, 1, 2, 4, 8, 16, 32, 64 ]
  , expectedKeys   = [ 'first', 'one', 'two', 'four', 'eight', 'sixteen', 'thirtyTwo', 'sixtyFour' ]
  , target         = []
  , keys           = []
  ;

parallel(source, asyncJob, function(err, result)
{
  assert.deepEqual(result, expectedResult);
  assert.deepEqual(target, expectedTarget);
  assert.deepEqual(keys, expectedKeys);
});

// supports full value, key, callback (shortcut) interface
function asyncJob(item, key, cb)
{
  // different delays (in ms) per item
  var delay = item * 25;

  // pretend different jobs take different time to finish
  // and not in consequential order
  var timeoutId = setTimeout(function() {
    keys.push(key);
    target.push(item);
    cb(null, item * 2);
  }, delay);

  // allow to cancel "leftover" jobs upon error
  // return function, invoking of which will abort this job
  return clearTimeout.bind(null, timeoutId);
}

More examples can be found in test/test-parallel-object.js.

Serial Jobs

Runs iterator over provided array sequentially. Stores output in the result array, on the matching positions. In the unlikely event of an error from one of the jobs, it will not proceed to the rest of the items in the list and will return the error along with salvaged data to the main callback function.

Input Array

var serial = require('asynckit/serial')
  , assert = require('assert')
  ;

var source         = [ 1, 1, 4, 16, 64, 32, 8, 2 ]
  , expectedResult = [ 2, 2, 8, 32, 128, 64, 16, 4 ]
  , expectedTarget = [ 0, 1, 2, 3, 4, 5, 6, 7 ]
  , target         = []
  ;

serial(source, asyncJob, function(err, result)
{
  assert.deepEqual(result, expectedResult);
  assert.deepEqual(target, expectedTarget);
});

// extended interface (item, key, callback)
// also supported for arrays
function asyncJob(item, key, cb)
{
  target.push(key);

  // it will be automatically made async
  // even if the iterator "returns" in the same event loop
  cb(null, item * 2);
}

More examples can be found in test/test-serial-array.js.

Input Object

It also supports named jobs, listed via an object.

var serial = require('asynckit').serial
  , assert = require('assert')
  ;

var source         = { first: 1, one: 1, four: 4, sixteen: 16, sixtyFour: 64, thirtyTwo: 32, eight: 8, two: 2 }
  , expectedResult = { first: 2, one: 2, four: 8, sixteen: 32, sixtyFour: 128, thirtyTwo: 64, eight: 16, two: 4 }
  , expectedTarget = [ 1, 1, 4, 16, 64, 32, 8, 2 ]
  , target         = []
  ;


serial(source, asyncJob, function(err, result)
{
  assert.deepEqual(result, expectedResult);
  assert.deepEqual(target, expectedTarget);
});

// shortcut interface (item, callback)
// works for object as well as for the arrays
function asyncJob(item, cb)
{
  target.push(item);

  // it will be automatically made async
  // even if the iterator "returns" in the same event loop
  cb(null, item * 2);
}

More examples can be found in test/test-serial-object.js.

Note: Since an object is an unordered collection of properties, sequential iteration over it may produce unexpected results. Whenever the order of job execution is important, please use the serialOrdered method.

Ordered Serial Iterations

TBD
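
In the meantime, the idea can be sketched as follows (illustrative only; the real asynckit.serialOrdered also supports aborts and the built-in ascending/descending sort helpers):

```javascript
// Minimal sketch of ordered serial iteration over an object: sort the
// keys first, then run the iterator one key at a time, asynchronously.
function serialOrderedSketch(source, iterator, sortMethod, callback) {
  var keys = Object.keys(source).sort(sortMethod);
  var result = {};
  var index = 0;

  function next() {
    if (index === keys.length) return callback(null, result);
    var key = keys[index++];
    // defer to keep iteration async even for synchronous iterators
    setImmediate(function () {
      iterator(source[key], key, function (err, value) {
        if (err) return callback(err, result);
        result[key] = value;
        next();
      });
    });
  }
  next();
}

serialOrderedSketch({ b: 2, a: 1 }, function (item, key, cb) {
  cb(null, item * 10);
}, undefined, function (err, result) {
  console.log(result); // { a: 10, b: 20 }
});
```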

For example compare-property package.

Streaming interface

TBD

Want to Know More?

More examples can be found in test folder.

Or open an issue with questions and/or suggestions.



@datastructures-js/priority-queue

build:? npm npm npm

A performant priority queue implementation using a Heap data structure.



Table of Contents

Install

npm install --save @datastructures-js/priority-queue

API

There are two types of PriorityQueue in this repo: MinPriorityQueue, which uses a MinHeap and considers an element with a smaller priority number as higher in priority, and MaxPriorityQueue, which uses a MaxHeap and considers an element with a bigger priority number as higher in priority.
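
Under the hood, both are binary heaps keyed on a numeric priority. A minimal illustrative min-heap sketch (not the package's implementation) shows why enqueue and dequeue cost O(log(n)): each operation walks only one root-to-leaf path.

```javascript
// Illustrative binary min-heap: the element with the smallest priority
// number sits at index 0 of the backing array.
class MinHeapSketch {
  constructor() { this.items = []; }

  enqueue(element, priority) {
    this.items.push({ element, priority });
    let i = this.items.length - 1;
    while (i > 0) { // bubble up toward the root
      const parent = (i - 1) >> 1;
      if (this.items[parent].priority <= this.items[i].priority) break;
      [this.items[parent], this.items[i]] = [this.items[i], this.items[parent]];
      i = parent;
    }
  }

  dequeue() {
    if (this.items.length === 0) return null;
    const top = this.items[0];
    const last = this.items.pop();
    if (this.items.length > 0) {
      this.items[0] = last;
      let i = 0;
      for (;;) { // sink down to restore the heap property
        const l = 2 * i + 1, r = 2 * i + 2;
        let min = i;
        if (l < this.items.length && this.items[l].priority < this.items[min].priority) min = l;
        if (r < this.items.length && this.items[r].priority < this.items[min].priority) min = r;
        if (min === i) break;
        [this.items[i], this.items[min]] = [this.items[min], this.items[i]];
        i = min;
      }
    }
    return top;
  }
}

const q = new MinHeapSketch();
q.enqueue('patient z', 3);
q.enqueue('patient y', 1);
q.enqueue('patient x', 2);
console.log(q.dequeue()); // { element: 'patient y', priority: 1 }
```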

require

const { MinPriorityQueue, MaxPriorityQueue } = require('@datastructures-js/priority-queue');

import

import { MinPriorityQueue, MaxPriorityQueue } from '@datastructures-js/priority-queue';

Construction

The constructor can accept a callback to get the priority from the queued element. If not passed, the priority should be passed with .enqueue.

Example

// the priority is not part of the enqueued element
const patientsQueue = new MinPriorityQueue();

// the priority is a prop of the queued element
const biddersQueue = new MaxPriorityQueue({ priority: (bid) => bid.value });

.enqueue(element[, priority])

adds an element with a priority (number) to the queue. The priority is not required here if a priority callback has been defined in the constructor. If passed here in addition to an existing constructor callback, it will override the callback.

params
name type
element object
priority number
runtime
O(log(n))

Example

// MinPriorityQueue Example, where priority is the turn for example
patientsQueue.enqueue('patient y', 1); // highest priority
patientsQueue.enqueue('patient z', 3);
patientsQueue.enqueue('patient w', 4); // lowest priority
patientsQueue.enqueue('patient x', 2);

// MaxPriorityQueue Example, where priority is the bid for example. Priority is obtained from the callback.
biddersQueue.enqueue({ name: 'bidder y', value: 1000 }); // lowest priority
biddersQueue.enqueue({ name: 'bidder w', value: 2500 });
biddersQueue.enqueue({ name: 'bidder z', value: 3500 }); // highest priority
biddersQueue.enqueue({ name: 'bidder x', value: 3000 });

.front()

returns the element with highest priority in the queue.

return description
object object literal with “priority” and “element” props
runtime
O(1)

Example

console.log(patientsQueue.front()); // { priority: 1, element: 'patient y' }

console.log(biddersQueue.front()); // { priority: 3500, element: { name: 'bidder z', value: 3500 } }

.back()

returns the element with the lowest priority in the queue. If multiple elements share the lowest priority, the one that was inserted first will be returned.

return description
object object literal with “priority” and “element” props
runtime
O(1)

Example

patientsQueue.enqueue('patient m', 4); // lowest priority
patientsQueue.enqueue('patient c', 4); // lowest priority
console.log(patientsQueue.back()); // { priority: 4, element: 'patient w' }

biddersQueue.enqueue({ name: 'bidder m', value: 1000 }); // lowest priority
biddersQueue.enqueue({ name: 'bidder c', value: 1000 }); // lowest priority
console.log(biddersQueue.back()); // { priority: 1000, element: { name: 'bidder y', value: 1000 } }

.dequeue()

removes and returns the element with highest priority in the queue.

return description
object object literal with “priority” and “element” props
runtime
O(log(n))

Example

console.log(patientsQueue.dequeue()); // { priority: 1, element: 'patient y' }
console.log(patientsQueue.front()); // { priority: 2, element: 'patient x' }

console.log(biddersQueue.dequeue()); // { priority: 3500, element: { name: 'bidder z', value: 3500 } }
console.log(biddersQueue.front()); // { priority: 3000, element: { name: 'bidder x', value: 3000 } }

.isEmpty()

checks if the queue is empty.

return
boolean
runtime
O(1)

Example

console.log(patientsQueue.isEmpty()); // false

console.log(biddersQueue.isEmpty()); // false

.size()

returns the number of elements in the queue.

return
number
runtime
O(1)

Example

console.log(patientsQueue.size()); // 5

console.log(biddersQueue.size()); // 5

.toArray()

returns a sorted array of elements by their priorities from highest to lowest.

return description
array an array of object literals with “priority” & “element” props
runtime
O(n*log(n))

Example

console.log(patientsQueue.toArray());
/*
[
  { priority: 2, element: 'patient x' },
  { priority: 3, element: 'patient z' },
  { priority: 4, element: 'patient c' },
  { priority: 4, element: 'patient w' },
  { priority: 4, element: 'patient m' }
]
*/

console.log(biddersQueue.toArray());
/*
[
  { priority: 3000, element: { name: 'bidder x', value: 3000 } },
  { priority: 2500, element: { name: 'bidder w', value: 2500 } },
  { priority: 1000, element: { name: 'bidder y', value: 1000 } },
  { priority: 1000, element: { name: 'bidder m', value: 1000 } },
  { priority: 1000, element: { name: 'bidder c', value: 1000 } }
]
*/

.clear()

clears all elements in the queue.

runtime
O(1)

Example

patientsQueue.clear();
console.log(patientsQueue.size()); // 0
console.log(patientsQueue.front()); // null
console.log(patientsQueue.dequeue()); // null

biddersQueue.clear();
console.log(biddersQueue.size()); // 0
console.log(biddersQueue.front()); // null
console.log(biddersQueue.dequeue()); // null

Build

grunt build


JSON5 – JSON for Humans

Build Status Coverage Status

The JSON5 Data Interchange Format (JSON5) is a superset of JSON that aims to alleviate some of the limitations of JSON by expanding its syntax to include some productions from ECMAScript 5.1.

This JavaScript library is the official reference implementation for JSON5 parsing and serialization libraries.

Summary of Features

The following ECMAScript 5.1 features, which are not supported in JSON, have been extended to JSON5.

Objects

Arrays

Strings

Numbers

Comments

White Space

Short Example

{
  // comments
  unquoted: 'and you can quote me on that',
  singleQuotes: 'I can use "double quotes" here',
  lineBreaks: "Look, Mom! \
No \\n's!",
  hexadecimal: 0xdecaf,
  leadingDecimalPoint: .8675309, andTrailing: 8675309.,
  positiveSign: +1,
  trailingComma: 'in objects', andIn: ['arrays',],
  "backwardsCompatible": "with JSON",
}

Specification

For a detailed explanation of the JSON5 format, please read the official specification.

Installation

Node.js

npm install json5
const JSON5 = require('json5')

Browsers

<script src="https://unpkg.com/json5@^1.0.0"></script>

This will create a global JSON5 variable.

API

The JSON5 API is compatible with the JSON API.

JSON5.parse()

Parses a JSON5 string, constructing the JavaScript value or object described by the string. An optional reviver function can be provided to perform a transformation on the resulting object before it is returned.

Syntax

JSON5.parse(text[, reviver])

Parameters

Return value

The object corresponding to the given JSON5 text.
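
Since the API mirrors the built-in JSON API, the reviver works the same way as in JSON.parse. A quick sketch using the built-in JSON.parse (JSON5.parse accepts the same (text, reviver) signature):

```javascript
// Reviver demo with built-in JSON.parse; per the compatibility note
// above, JSON5.parse takes the same (text, reviver) arguments.
const result = JSON.parse('{"n": "42"}', (key, value) =>
  key === 'n' ? Number(value) : value
);
console.log(result); // { n: 42 }
```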

JSON5.stringify()

Converts a JavaScript value to a JSON5 string, optionally replacing values if a replacer function is specified, or optionally including only the specified properties if a replacer array is specified.

Syntax

JSON5.stringify(value[, replacer[, space]]) JSON5.stringify(value[, options])

Parameters

Return value

A JSON5 string representing the value.

Node.js require() JSON5 files

When using Node.js, you can require() JSON5 files by adding the following statement.

require('json5/lib/register')

Then you can load a JSON5 file with a Node.js require() statement. For example:

const config = require('./config.json5')

CLI

Since JSON is more widely used than JSON5, this package includes a CLI for converting JSON5 to JSON and for validating the syntax of JSON5 documents.

Installation

npm install --global json5

Usage

json5 [options] <file>

If <file> is not provided, then STDIN is used.

Options:

Contributing

Development

git clone https://github.com/json5/json5
cd json5
npm install

When contributing code, please write relevant tests and run npm test and npm run lint before submitting pull requests. Please use an editor that supports EditorConfig.

Issues

To report bugs or request features regarding the JSON5 data format, please submit an issue to the official specification repository.

To report bugs or request features regarding the JavaScript implementation of JSON5, please submit an issue to this repository.

Credits

Assem Kishore founded this project.

Michael Bolin independently arrived at and published some of these same ideas with awesome explanations and detail. Recommended reading: Suggested Improvements to JSON

Douglas Crockford of course designed and built JSON, but his state machine diagrams on the JSON website, as cheesy as it may sound, gave us motivation and confidence that building a new parser to implement these ideas was within reach! The original implementation of JSON5 was also modeled directly off of Doug’s open-source json_parse.js parser. We’re grateful for that clean and well-documented code.

Max Nanasy has been an early and prolific supporter, contributing multiple patches and ideas.

Andrew Eisenberg contributed the original stringify method.

Jordan Tucker has aligned JSON5 more closely with ES5, wrote the official JSON5 specification, completely rewrote the codebase from the ground up, and is actively maintaining this project.



JSON5 – JSON for Humans

Build Status Coverage Status

The JSON5 Data Interchange Format (JSON5) is a superset of JSON that aims to alleviate some of the limitations of JSON by expanding its syntax to include some productions from ECMAScript 5.1.

This JavaScript library is the official reference implementation for JSON5 parsing and serialization libraries.

Summary of Features

The following ECMAScript 5.1 features, which are not supported in JSON, have been extended to JSON5.

Objects

Arrays

Strings

Numbers

Comments

White Space

Short Example

{
  // comments
  unquoted: 'and you can quote me on that',
  singleQuotes: 'I can use "double quotes" here',
  lineBreaks: "Look, Mom! \
No \\n's!",
  hexadecimal: 0xdecaf,
  leadingDecimalPoint: .8675309, andTrailing: 8675309.,
  positiveSign: +1,
  trailingComma: 'in objects', andIn: ['arrays',],
  "backwardsCompatible": "with JSON",
}

Specification

For a detailed explanation of the JSON5 format, please read the official specification.

Installation

Node.js

npm install json5
const JSON5 = require('json5')

Browsers

<script src="https://unpkg.com/json5@^2.0.0/dist/index.min.js"></script>

This will create a global JSON5 variable.

API

The JSON5 API is compatible with the JSON API.

JSON5.parse()

Parses a JSON5 string, constructing the JavaScript value or object described by the string. An optional reviver function can be provided to perform a transformation on the resulting object before it is returned.

Syntax

JSON5.parse(text[, reviver])

Parameters

Return value

The object corresponding to the given JSON5 text.

JSON5.stringify()

Converts a JavaScript value to a JSON5 string, optionally replacing values if a replacer function is specified, or optionally including only the specified properties if a replacer array is specified.

Syntax

JSON5.stringify(value[, replacer[, space]]) JSON5.stringify(value[, options])

Parameters

Return value

A JSON5 string representing the value.

Node.js require() JSON5 files

When using Node.js, you can require() JSON5 files by adding the following statement.

require('json5/lib/register')

Then you can load a JSON5 file with a Node.js require() statement. For example:

const config = require('./config.json5')

CLI

Since JSON is more widely used than JSON5, this package includes a CLI for converting JSON5 to JSON and for validating the syntax of JSON5 documents.

Installation

npm install --global json5

Usage

json5 [options] <file>

If <file> is not provided, then STDIN is used.

Options:

Contributing

Development

git clone https://github.com/json5/json5
cd json5
npm install

When contributing code, please write relevant tests and run npm test and npm run lint before submitting pull requests. Please use an editor that supports EditorConfig.

Issues

To report bugs or request features regarding the JSON5 data format, please submit an issue to the official specification repository.

To report bugs or request features regarding the JavaScript implementation of JSON5, please submit an issue to this repository.

Credits

Assem Kishore founded this project.

Michael Bolin independently arrived at and published some of these same ideas with awesome explanations and detail. Recommended reading: Suggested Improvements to JSON

Douglas Crockford of course designed and built JSON, but his state machine diagrams on the JSON website, as cheesy as it may sound, gave us motivation and confidence that building a new parser to implement these ideas was within reach! The original implementation of JSON5 was also modeled directly off of Doug’s open-source json_parse.js parser. We’re grateful for that clean and well-documented code.

Max Nanasy has been an early and prolific supporter, contributing multiple patches and ideas.

Andrew Eisenberg contributed the original stringify method.

Jordan Tucker has aligned JSON5 more closely with ES5, wrote the official JSON5 specification, completely rewrote the codebase from the ground up, and is actively maintaining this project.



split-string NPM version NPM monthly downloads NPM total downloads Linux Build Status

Split a string on a character except when the character is escaped.

Please consider following this project’s author, Jon Schlinkert, and consider starring the project to show your :heart: and support.

Install

Install with npm:

$ npm install --save split-string

Why use this?


Although it’s easy to split on a string:

It’s more challenging to split a string whilst respecting escaped or quoted characters.

Bad

Good

See the options to learn how to choose the separator or retain quotes or escaping.
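
A plain String#split illustrates the "Bad" case above: it has no notion of quoting or escaping, so quoted segments get broken apart.

```javascript
// Naive split: the double-quoted segment "b.c" is split in the middle.
console.log('a."b.c".d'.split('.'));
//=> [ 'a', '"b', 'c"', 'd' ]
```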


Usage

var split = require('split-string');

split('a.b.c');
//=> ['a', 'b', 'c']

// respects escaped characters
split('a.b.c\\.d');
//=> ['a', 'b', 'c.d']

// respects double-quoted strings
split('a."b.c.d".e');
//=> ['a', 'b.c.d', 'e']

Brackets

Also respects brackets unless disabled:

split('a (b c d) e', ' ');
//=> ['a', '(b c d)', 'e']

Options

options.brackets

Type: object|boolean

Default: undefined

Description

If enabled, split-string will not split inside brackets. The following bracket types are supported when options.brackets is true:

{
  '<': '>',
  '(': ')',
  '[': ']',
  '{': '}'
}

Alternatively, an object of brackets may be passed, where each property key is an opening delimiter and the corresponding property value is its closing delimiter.

Examples

// no bracket support by default
split('a.{b.c}');
//=> [ 'a', '{b', 'c}' ]

// support all basic bracket types: "<>{}[]()"
split('a.{b.c}', {brackets: true});
//=> [ 'a', '{b.c}' ]

// also supports nested brackets 
split('a.{b.{c.d}.e}.f', {brackets: true});
//=> [ 'a', '{b.{c.d}.e}', 'f' ]

// support only the specified brackets
split('[a.b].(c.d)', {brackets: {'[': ']'}});
//=> [ '[a.b]', '(c', 'd)' ]

options.sep

Type: string

Default: .

The separator/character to split on.

Example

split('a.b,c', {sep: ','});
//=> ['a.b', 'c']

// you can also pass the separator as string as the last argument
split('a.b,c', ',');
//=> ['a.b', 'c']

options.keepEscaping

Type: boolean

Default: undefined

Keep backslashes in the result.

Example

split('a.b\\.c');
//=> ['a', 'b.c']

split('a.b\\.c', {keepEscaping: true});
//=> ['a', 'b\.c']

options.keepQuotes

Type: boolean

Default: undefined

Keep single- or double-quotes in the result.

Example

split('a."b.c.d".e');
//=> ['a', 'b.c.d', 'e']

split('a."b.c.d".e', {keepQuotes: true});
//=> ['a', '"b.c.d"', 'e']

split('a.\'b.c.d\'.e', {keepQuotes: true});
//=> ['a', '\'b.c.d\'', 'e']

options.keepDoubleQuotes

Type: boolean

Default: undefined

Keep double-quotes in the result.

Example

split('a."b.c.d".e');
//=> ['a', 'b.c.d', 'e']

split('a."b.c.d".e', {keepDoubleQuotes: true});
//=> ['a', '"b.c.d"', 'e']

options.keepSingleQuotes

Type: boolean

Default: undefined

Keep single-quotes in the result.

Example

split('a.\'b.c.d\'.e');
//=> ['a', 'b.c.d', 'e']

split('a.\'b.c.d\'.e', {keepSingleQuotes: true});
//=> ['a', '\'b.c.d\'', 'e']

Customizer

Type: function

Default: undefined

Pass a function as the last argument to customize how tokens are added to the array.

Example

var arr = split('a.b', function(tok) {
  if (tok.arr[tok.arr.length - 1] === 'a') {
    tok.split = false;
  }
});
console.log(arr);
//=> ['a.b']

Properties

The tok object has the following properties:

Release history

v3.0.0 - 2017-06-17

Added

About

Contributing

Pull requests and stars are always welcome. For bugs and feature requests, please create an issue.

Running Tests

Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command:

Building docs

(This project’s readme.md is generated by verb, please don’t edit the readme directly. Any changes to the readme must be made in the .verb.md readme template.)

To generate the readme, run the following command:

You might also be interested in these projects:

Commits Contributor
28 jonschlinkert
9 doowb

Author

Jon Schlinkert


This file was generated by verb-generate-readme, v0.6.0, on November 19, 2017.


serve-static

NPM Version NPM Downloads Linux Build Windows Build Test Coverage

Install

This is a Node.js module available through the npm registry. Installation is done using the npm install command:

$ npm install serve-static

API

var serveStatic = require('serve-static')

serveStatic(root, options)

Create a new middleware function to serve files from within a given root directory. The file to serve will be determined by combining req.url with the provided root directory. When a file is not found, instead of sending a 404 response, this module will instead call next() to move on to the next middleware, allowing for stacking and fall-backs.

Options

acceptRanges

Enable or disable accepting ranged requests, defaults to true. Disabling this will not send Accept-Ranges and ignore the contents of the Range request header.

cacheControl

Enable or disable setting Cache-Control response header, defaults to true. Disabling this will ignore the immutable and maxAge options.

dotfiles

Set how “dotfiles” are treated when encountered. A dotfile is a file or directory that begins with a dot (“.”). Note this check is done on the path itself without checking if the path actually exists on the disk. If root is specified, only the dotfiles above the root are checked (i.e. the root itself can be within a dotfile when set to “deny”).

The default value is similar to 'ignore', with the exception that this default will not ignore the files within a directory that begins with a dot.

etag

Enable or disable etag generation, defaults to true.

extensions

Set file extension fallbacks. When set, if a file is not found, the given extensions will be added to the file name and searched for. The first that exists will be served. Example: ['html', 'htm'].

The default value is false.

fallthrough

Set the middleware to have client errors fall-through as just unhandled requests, otherwise forward a client error. The difference is that client errors like a bad request or a request to a non-existent file will cause this middleware to simply next() to your next middleware when this value is true. When this value is false, these errors (even 404s), will invoke next(err).

Typically true is desired such that multiple physical directories can be mapped to the same web address or for routes to fill in non-existent files.

The value false can be used if this middleware is mounted at a path that is designed to be strictly a single file system directory, which allows for short-circuiting 404s for less overhead. This middleware will also reply to all methods.

The default value is true.

immutable

Enable or disable the immutable directive in the Cache-Control response header, defaults to false. If set to true, the maxAge option should also be specified to enable caching. The immutable directive will prevent supported clients from making conditional requests during the life of the maxAge option to check if the file has changed.

index

By default this module will send “index.html” files in response to a request on a directory. To disable this, set it to false; to supply a new index, pass a string or an array in preferred order.

lastModified

Enable or disable Last-Modified header, defaults to true. Uses the file system’s last modified value.

maxAge

Provide a max-age in milliseconds for http caching, defaults to 0. This can also be a string accepted by the ms module.

redirect

Redirect to trailing “/” when the pathname is a dir. Defaults to true.

setHeaders

Function to set custom headers on response. Alterations to the headers need to occur synchronously. The function is called as fn(res, path, stat), where the arguments are:

Examples

Serve files with vanilla node.js http server

var finalhandler = require('finalhandler')
var http = require('http')
var serveStatic = require('serve-static')

// Serve up public/ftp folder
var serve = serveStatic('public/ftp', { 'index': ['index.html', 'index.htm'] })

// Create server
var server = http.createServer(function onRequest (req, res) {
  serve(req, res, finalhandler(req, res))
})

// Listen
server.listen(3000)

Serve all files as downloads

var contentDisposition = require('content-disposition')
var finalhandler = require('finalhandler')
var http = require('http')
var serveStatic = require('serve-static')

// Serve up public/ftp folder
var serve = serveStatic('public/ftp', {
  'index': false,
  'setHeaders': setHeaders
})

// Set header to force download
function setHeaders (res, path) {
  res.setHeader('Content-Disposition', contentDisposition(path))
}

// Create server
var server = http.createServer(function onRequest (req, res) {
  serve(req, res, finalhandler(req, res))
})

// Listen
server.listen(3000)

Serving using express

Simple

This is a simple example of using Express.

var express = require('express')
var serveStatic = require('serve-static')

var app = express()

app.use(serveStatic('public/ftp', { 'index': ['default.html', 'default.htm'] }))
app.listen(3000)

Multiple roots

This example shows a simple way to search through multiple directories. Files are looked for in public-optimized/ first, then in public/ as a fallback.

var express = require('express')
var path = require('path')
var serveStatic = require('serve-static')

var app = express()

app.use(serveStatic(path.join(__dirname, 'public-optimized')))
app.use(serveStatic(path.join(__dirname, 'public')))
app.listen(3000)

Different settings for paths

This example shows how to set a different max age depending on the served file type. In this example, HTML files are not cached, while everything else is cached for 1 day.

var express = require('express')
var path = require('path')
var serveStatic = require('serve-static')

var app = express()

app.use(serveStatic(path.join(__dirname, 'public'), {
  maxAge: '1d',
  setHeaders: setCustomCacheControl
}))

app.listen(3000)

function setCustomCacheControl (res, path) {
  if (serveStatic.mime.lookup(path) === 'text/html') {
    // Custom Cache-Control for HTML files
    res.setHeader('Cache-Control', 'public, max-age=0')
  }
}


class-utils NPM version NPM monthly downloads NPM total downloads Linux Build Status

Utils for working with JavaScript classes and prototype methods.

Please consider following this project’s author, Jon Schlinkert, and consider starring the project to show your :heart: and support.

Install

Install with npm:

$ npm install --save class-utils

Usage

var cu = require('class-utils');

API

.has

Returns true if an array has any of the given elements, or an object has any of the given keys.

Params

Example

cu.has(['a', 'b', 'c'], 'c');
//=> true

cu.has(['a', 'b', 'c'], ['c', 'z']);
//=> true

cu.has({a: 'b', c: 'd'}, ['c', 'z']);
//=> true

.hasAll

Returns true if an array or object has all of the given values.

Params

Example

cu.hasAll(['a', 'b', 'c'], 'c');
//=> true

cu.hasAll(['a', 'b', 'c'], ['c', 'z']);
//=> false

cu.hasAll({a: 'b', c: 'd'}, ['c', 'z']);
//=> false

.arrayify

Cast the given value to an array.

Params

Example

cu.arrayify('foo');
//=> ['foo']

cu.arrayify(['foo']);
//=> ['foo']

.hasConstructor

Returns true if a value has a constructor.

Params

Example

cu.hasConstructor({});
//=> true

cu.hasConstructor(Object.create(null));
//=> false

.nativeKeys

Get the native ownPropertyNames from the constructor of the given object. An empty array is returned if the object does not have a constructor.

Params

Example

cu.nativeKeys({a: 'b', b: 'c', c: 'd'})
//=> ['a', 'b', 'c']

cu.nativeKeys(function(){})
//=> ['length', 'caller']

.getDescriptor

Returns the property descriptor for key if it’s an “own” property of the given object.

Params

Example

function App() {}
Object.defineProperty(App.prototype, 'count', {
  get: function() {
    return Object.keys(this).length;
  }
});
cu.getDescriptor(App.prototype, 'count');
// returns:
// {
//   get: [Function],
//   set: undefined,
//   enumerable: false,
//   configurable: false
// }

.copyDescriptor

Copy a descriptor from one object to another.

Params

Example

function App() {}
Object.defineProperty(App.prototype, 'count', {
  get: function() {
    return Object.keys(this).length;
  }
});
var obj = {};
cu.copyDescriptor(obj, App.prototype, 'count');

.copy

Copy static properties, prototype properties, and descriptors from one object to another.

Params
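Example

This readme includes no example for .copy, so here is a self-contained sketch of descriptor-preserving copying in the same spirit. It uses plain JavaScript; `copyAll` is a made-up helper name, not the library’s implementation.

```javascript
// Sketch of descriptor-preserving copying, similar in spirit to cu.copy.
// `copyAll` is a hypothetical helper, not part of class-utils.
function copyAll(receiver, provider) {
  Object.getOwnPropertyNames(provider).forEach(function (key) {
    var descriptor = Object.getOwnPropertyDescriptor(provider, key);
    Object.defineProperty(receiver, key, descriptor);
  });
  return receiver;
}

var source = {};
Object.defineProperty(source, 'answer', {
  get: function () { return 42; },
  enumerable: false
});

var target = copyAll({}, source);
console.log(target.answer); // 42 — the getter itself was copied, not a snapshot of its value
```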

.inherit

Inherit the static properties, prototype properties, and descriptors of an object.

Params

.extend

Returns a function for extending the static properties, prototype properties, and descriptors from the Parent constructor onto Child constructors.

Params

Example

var extend = cu.extend(Parent);
Parent.extend(Child);

// optional methods
Parent.extend(Child, {
  foo: function() {},
  bar: function() {}
});

.bubble

Bubble up events emitted from static methods on the Parent ctor.

Params

About

Contributing

Pull requests and stars are always welcome. For bugs and feature requests, please create an issue.

Running Tests

Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command:

Building docs

(This project’s readme.md is generated by verb, please don’t edit the readme directly. Any changes to the readme must be made in the .verb.md readme template.)

To generate the readme, run the following command:

You might also be interested in these projects:

Commits Contributor
34 jonschlinkert
8 doowb
2 wtgtybhertgeghgtwtg

Author

Jon Schlinkert


This file was generated by verb-generate-readme, v0.6.0, on January 11, 2018.



uuid Build Status

Simple, fast generation of RFC4122 UUIDs.

Features:

[Deprecation warning: The use of require('uuid') is deprecated and will not be supported after version 3.x of this module. Instead, use require('uuid/[v1|v3|v4|v5]') as shown in the examples below.]

npm install uuid

Then generate your uuid version of choice …

Version 1 (timestamp):

const uuidv1 = require('uuid/v1');
uuidv1(); // ⇨ '2c5ea4c0-4067-11e9-8bad-9b1deb4d3b7d'

Version 3 (namespace):

const uuidv3 = require('uuid/v3');

// ... using predefined DNS namespace (for domain names)
uuidv3('hello.example.com', uuidv3.DNS); // ⇨ '9125a8dc-52ee-365b-a5aa-81b0b3681cf6'

// ... using predefined URL namespace (for, well, URLs)
uuidv3('http://example.com/hello', uuidv3.URL); // ⇨ 'c6235813-3ba4-3801-ae84-e0a6ebb7d138'

// ... using a custom namespace
//
// Note: Custom namespaces should be a UUID string specific to your application!
// E.g. the one here was generated using this modules `uuid` CLI.
const MY_NAMESPACE = '1b671a64-40d5-491e-99b0-da01ff1f3341';
uuidv3('Hello, World!', MY_NAMESPACE); // ⇨ 'e8b5a51d-11c8-3310-a6ab-367563f20686'

Version 4 (random):

const uuidv4 = require('uuid/v4');
uuidv4(); // ⇨ '1b9d6bcd-bbfd-4b2d-9b5d-ab8dfbbd4bed'

Version 5 (namespace):

const uuidv5 = require('uuid/v5');

// ... using predefined DNS namespace (for domain names)
uuidv5('hello.example.com', uuidv5.DNS); // ⇨ 'fdda765f-fc57-5604-a269-52a7df8164ec'

// ... using predefined URL namespace (for, well, URLs)
uuidv5('http://example.com/hello', uuidv5.URL); // ⇨ '3bbcee75-cecc-5b56-8031-b6641c1ed1f1'

// ... using a custom namespace
//
// Note: Custom namespaces should be a UUID string specific to your application!
// E.g. the one here was generated using this modules `uuid` CLI.
const MY_NAMESPACE = '1b671a64-40d5-491e-99b0-da01ff1f3341';
uuidv5('Hello, World!', MY_NAMESPACE); // ⇨ '630eb68f-e0fa-5ecc-887a-7c7a62614681'

API

Version 1

const uuidv1 = require('uuid/v1');

// Incantations
uuidv1();
uuidv1(options);
uuidv1(options, buffer, offset);

Generate and return a RFC4122 v1 (timestamp-based) UUID.

Returns buffer, if specified, otherwise the string form of the UUID

Note: The default node id (the last 12 digits in the UUID) is generated once, randomly, on process startup, and then remains unchanged for the duration of the process.

Example: Generate string UUID with fully-specified options

const v1options = {
  node: [0x01, 0x23, 0x45, 0x67, 0x89, 0xab],
  clockseq: 0x1234,
  msecs: new Date('2011-11-01').getTime(),
  nsecs: 5678
};
uuidv1(v1options); // ⇨ '710b962e-041c-11e1-9234-0123456789ab'

Example: In-place generation of two binary IDs

// Generate two ids in an array
const arr = new Array();
uuidv1(null, arr, 0);  // ⇨ 
  // [
  //    44,  94, 164, 192,  64, 103,
  //    17, 233, 146,  52, 155,  29,
  //   235,  77,  59, 125
  // ]
uuidv1(null, arr, 16); // ⇨ 
  // [
  //    44, 94, 164, 192,  64, 103, 17, 233,
  //   146, 52, 155,  29, 235,  77, 59, 125,
  //    44, 94, 164, 193,  64, 103, 17, 233,
  //   146, 52, 155,  29, 235,  77, 59, 125
  // ]

Version 3

const uuidv3 = require('uuid/v3');

// Incantations
uuidv3(name, namespace);
uuidv3(name, namespace, buffer);
uuidv3(name, namespace, buffer, offset);

Generate and return a RFC4122 v3 UUID.

Returns buffer, if specified, otherwise the string form of the UUID

Example:

uuidv3('hello world', MY_NAMESPACE);  // ⇨ '042ffd34-d989-321c-ad06-f60826172424'

Version 4

const uuidv4 = require('uuid/v4')

// Incantations
uuidv4();
uuidv4(options);
uuidv4(options, buffer, offset);

Generate and return a RFC4122 v4 UUID.

Returns buffer, if specified, otherwise the string form of the UUID

Example: Generate string UUID with predefined random values

const v4options = {
  random: [
    0x10, 0x91, 0x56, 0xbe, 0xc4, 0xfb, 0xc1, 0xea,
    0x71, 0xb4, 0xef, 0xe1, 0x67, 0x1c, 0x58, 0x36
  ]
};
uuidv4(v4options); // ⇨ '109156be-c4fb-41ea-b1b4-efe1671c5836'

Example: Generate two IDs in a single buffer

const buffer = new Array();
uuidv4(null, buffer, 0);  // ⇨ 
  // [
  //   155, 29, 235,  77,  59,
  //   125, 75, 173, 155, 221,
  //    43, 13, 123,  61, 203,
  //   109
  // ]
uuidv4(null, buffer, 16); // ⇨ 
  // [
  //   155,  29, 235,  77,  59, 125,  75, 173,
  //   155, 221,  43,  13, 123,  61, 203, 109,
  //    27, 157, 107, 205, 187, 253,  75,  45,
  //   155,  93, 171, 141, 251, 189,  75, 237
  // ]

Version 5

const uuidv5 = require('uuid/v5');

// Incantations
uuidv5(name, namespace);
uuidv5(name, namespace, buffer);
uuidv5(name, namespace, buffer, offset);

Generate and return a RFC4122 v5 UUID.

Returns buffer, if specified, otherwise the string form of the UUID

Example:

uuidv5('hello world', MY_NAMESPACE);  // ⇨ '9f282611-e0fd-5650-8953-89c8e342da0b'

Command Line

UUIDs can be generated from the command line with the uuid command.

$ uuid
ddeb27fb-d9a0-4624-be4d-4615062daed4

$ uuid v1
02d37060-d446-11e7-a9fa-7bdae751ebe1

Type uuid --help for usage details

Testing

npm test
Markdown generated from README_js.md by RunMD


cache-base NPM version NPM monthly downloads NPM total downloads Linux Build Status

Basic object cache with get, set, del, and has methods for node.js/javascript projects.

Install

Install with npm:

$ npm install --save cache-base
Usage

var Cache = require('cache-base');

// instantiate
var app = new Cache();

// set values
app.set('a', 'b');
app.set('c.d', 'e');

// get values
app.get('a');
//=> 'b'
app.get('c');
//=> {d: 'e'}

console.log(app.cache);
//=> {a: 'b'}
Inherit

var util = require('util');
var Cache = require('cache-base');

function MyApp() {
  Cache.call(this);
}
util.inherits(MyApp, Cache);

var app = new MyApp();
app.set('a', 'b');
app.get('a');
//=> 'b'

Namespace

Define a custom property for storing values.

var Cache = require('cache-base').namespace('data');
var app = new Cache();
app.set('a', 'b');
console.log(app.data);
//=> {a: 'b'}
API

namespace

Create a Cache constructor that when instantiated will store values on the given prop.

Params

prop {String}: The property name to use for storing values.
returns {Function}: Returns a custom Cache constructor.

Example

var Cache = require('cache-base').namespace('data');
var cache = new Cache();

cache.set('foo', 'bar');
//=> {data: {foo: 'bar'}}

Cache

Create a new Cache. Internally the Cache constructor is created using the namespace function, with cache defined as the storage object.

Params

cache {Object}: Optionally pass an object to initialize with.

Example

var app = new Cache();
.set

Assign value to key. Also emits set with the key and value.

Params

key {String}
value {any}
returns {Object}: Returns the instance for chaining.

Events

emits: set with key and value as arguments.

Example

app.on('set', function(key, val) {
  // do something when `set` is emitted
});

app.set(key, value);

// also takes an object or array
app.set({name: 'Halle'});
app.set([{foo: 'bar'}, {baz: 'quux'}]);
console.log(app);
//=> {name: 'Halle', foo: 'bar', baz: 'quux'}
.union

Union array to key. Also emits set with the key and value.

Params

key {String}
value {any}
returns {Object}: Returns the instance for chaining.

Example

app.union('a.b', ['foo']);
app.union('a.b', ['bar']);
console.log(app.get('a'));
//=> {b: ['foo', 'bar']}
.get

Return the value of key. Dot notation may be used to get nested property values.

Params

key {String}: The name of the property to get. Dot-notation may be used.
returns {any}: Returns the value of key.

Events

emits: get with key and value as arguments.

Example

app.set('a.b.c', 'd');
app.get('a.b');
//=> {c: 'd'}

app.get(['a', 'b']);
//=> {c: 'd'}
.has

Return true if app has a stored value for key, false only if the value is undefined.

Params

key {String}
returns {Boolean}

Events

emits: has with key and true or false as arguments.

Example

app.set('foo', 'bar');
app.has('foo');
//=> true
.del

Delete one or more properties from the instance.

Params

key {String|Array}: Property name or array of property names.
returns {Object}: Returns the instance for chaining.

Events

emits: del with the key as the only argument.

Example

app.del(); // delete all
// or
app.del('foo');
// or
app.del(['foo', 'bar']);
.clear

Reset the entire cache to an empty object.

Example

app.clear();

.visit

Visit method over the properties in the given object, or map visit over the object-elements in an array.

Params

method {String}: The name of the base method to call.
val {Object|Array}: The object or array to iterate over.
returns {Object}: Returns the instance for chaining.
About

Related projects

base-methods: base-methods is the foundation for creating modular, unit testable and highly pluggable node.js applications. | homepage
get-value: Use property paths (a.b.c) to get a nested value from an object. | homepage
has-value: Returns true if a value exists, false if empty. Works with deeply nested values. | homepage
option-cache: Simple API for managing options in JavaScript applications. | homepage
set-value: Create nested values and any intermediaries using dot notation ('a.b.c') paths. | homepage
unset-value: Delete nested properties from an object using dot notation. | homepage

Contributing

Pull requests and stars are always welcome. For bugs and feature requests, please create an issue.

Commits Contributor
54 jonschlinkert
2 wtgtybhertgeghgtwtg
Building docs

(This project’s readme.md is generated by verb, please don’t edit the readme directly. Any changes to the readme must be made in the .verb.md readme template.)

To generate the readme, run the following command:

$ npm install -g verbose/verb#dev verb-generate-readme && verb

Running tests

Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command:

$ npm install && npm test
Author

Jon Schlinkert

github/jonschlinkert
twitter/jonschlinkert


This file was generated by verb-generate-readme, v0.6.0, on July 22, 2017.



gcs-resumable-upload

Upload a file to Google Cloud Storage with built-in resumable behavior
npm install gcs-resumable-upload

const {upload} = require('gcs-resumable-upload');
const fs = require('fs');

Or from the command line:

If somewhere during the operation, you lose your connection to the internet or your tough-guy brother slammed your laptop shut when he saw what you were uploading, the next time you try to upload to that file, it will resume automatically from where you left off.
How it works

This module stores a file using ConfigStore that is written to when you first start an upload. It is aliased by the file name you are uploading to and holds the first 16kb chunk of data* as well as the unique resumable upload URI. (Resumable uploads are complicated)

If your upload was interrupted, next time you run the code, we ask the API how much data it has already, then simply dump all of the data coming through the pipe that it already has.

After the upload completes, the entry in the config file is removed. Done!
* The first 16kb chunk is stored to validate if you are sending the same data when you resume the upload. If not, a new resumable upload is started with the new data.
Authentication

Oh, right. This module uses google-auth-library and accepts all of the configuration that module does to strike up a connection as config.authConfig. See authConfig.

API

const {gcsResumableUpload} = require('gcs-resumable-upload')
const upload = gcsResumableUpload(config)
upload is an instance of Duplexify.

Methods

upload.createURI(callback)

callback(err, resumableURI)
callback.err

Invoked if the authorization failed or the request to start a resumable session failed.

callback.resumableURI

The resumable upload session URI.

upload.deleteConfig()

This will remove the config data associated with the provided file.

Events

.on(‘error’, function (err) {})

err

Invoked if the authorization failed, the request failed, or the file wasn’t successfully uploaded.

.on(‘response’, function (response) {})

resp

The response object from Gaxios.

metadata

The file’s new metadata.

.on(‘progress’, function (progress) {})

progress

progress.bytesWritten
progress.contentLength

Progress event provides upload stats like Transferred Bytes and content length.

.on(‘finish’, function () {})

The file was uploaded successfully.

js-yaml

Here we cover the most ‘useful’ methods. If you need advanced details (creating your own tags), see the wiki and examples for more info.

const yaml = require('js-yaml');
const fs   = require('fs');

// Get document, or throw exception on error
try {
  const doc = yaml.safeLoad(fs.readFileSync('/home/ixti/example.yml', 'utf8'));
  console.log(doc);
} catch (e) {
  console.log(e);
}

safeLoad (string [ , options ])

The recommended way to load YAML. Parses string as a single YAML document. Returns either a plain object, a string or undefined, or throws YAMLException on error. By default, does not support regexps, functions and undefined. This method is safe for untrusted data.

options:

NOTE: This function does not understand multi-document sources, it throws exception on those.

NOTE: JS-YAML does not support schema-specific tag resolution restrictions. So, the JSON schema is not as strict as defined in the YAML specification: it allows numbers in any notation, accepts Null and NULL as null, etc. The core schema also has no such restrictions. It allows binary notation for integers.

load (string [ , options ])

Use with care with untrusted sources. The same as safeLoad() but uses DEFAULT_FULL_SCHEMA by default - adds some JavaScript-specific types: !!js/function, !!js/regexp and !!js/undefined. For untrusted sources, you must additionally validate object structure to avoid injections:

const untrusted_code = '"toString": !<tag:yaml.org,2002:js/function> "function (){very_evil_thing();}"';

// I'm just converting that string, what could possibly go wrong?
require('js-yaml').load(untrusted_code) + ''

safeLoadAll (string [, iterator] [, options ])

Same as safeLoad(), but understands multi-document sources. Applies iterator to each document if specified, or returns array of documents.

const yaml = require('js-yaml');

yaml.safeLoadAll(data, function (doc) {
  console.log(doc);
});

loadAll (string [, iterator] [ , options ])

Same as safeLoadAll() but uses DEFAULT_FULL_SCHEMA by default.

safeDump (object [ , options ])

Serializes object as a YAML document. Uses DEFAULT_SAFE_SCHEMA, so it will throw an exception if you try to dump regexps or functions. However, you can disable exceptions by setting the skipInvalid option to true.

options:

The following table shows the available styles (e.g. “canonical”, “binary”…) for each tag (e.g. !!null, !!int…). YAML output is shown on the right side after => (default setting) or ->:

!!null
  "canonical"   -> "~"
  "lowercase"   => "null"
  "uppercase"   -> "NULL"
  "camelcase"   -> "Null"

!!int
  "binary"      -> "0b1", "0b101010", "0b1110001111010"
  "octal"       -> "01", "052", "016172"
  "decimal"     => "1", "42", "7290"
  "hexadecimal" -> "0x1", "0x2A", "0x1C7A"

!!bool
  "lowercase"   => "true", "false"
  "uppercase"   -> "TRUE", "FALSE"
  "camelcase"   -> "True", "False"

!!float
  "lowercase"   => ".nan", '.inf'
  "uppercase"   -> ".NAN", '.INF'
  "camelcase"   -> ".NaN", '.Inf'

Example:

safeDump (object, {
  'styles': {
    '!!null': 'canonical' // dump null as ~
  },
  'sortKeys': true        // sort object keys
});

dump (object [ , options ])

Same as safeDump() but without limits (uses DEFAULT_FULL_SCHEMA by default).

The list of standard YAML tags and corresponding JavaScript types. See also YAML tag discussion and YAML types repository.

!!null ''                   # null
!!bool 'yes'                # bool
!!int '3...'                # number
!!float '3.14...'           # number
!!binary '...base64...'     # buffer
!!timestamp 'YYYY-...'      # date
!!omap [ ... ]              # array of key-value pairs
!!pairs [ ... ]             # array of array pairs
!!set { ... }               # array of objects with given keys and null values
!!str '...'                 # string
!!seq [ ... ]               # array
!!map { ... }               # object

JavaScript-specific tags

!!js/regexp /pattern/gim            # RegExp
!!js/undefined ''                   # Undefined
!!js/function 'function () {...}'   # Function

Caveats

Note that if you use arrays or objects as keys in JS-YAML, JS does not allow objects or arrays as keys, and stringifies them (by calling their toString() method) at the moment they are added.

---
? [ foo, bar ]
: - baz
? { foo: bar }
: - baz
  - baz
{ "foo,bar": ["baz"], "[object Object]": ["baz", "baz"] }

Also, reading of properties on implicit block mapping keys is not supported yet. So, the following YAML document cannot be loaded.

&anchor foo:
  foo: bar
  *anchor: duplicate key
  baz: bat
  *anchor: duplicate key

js-yaml for enterprise

Available as part of the Tidelift Subscription

The maintainers of js-yaml and thousands of other packages are working with Tidelift to deliver commercial support and maintenance for the open source dependencies you use to build your applications. Save time, reduce risk, and improve code health, while paying the maintainers of the exact dependencies you use. Learn more.



keyv

Simple key-value storage with support for multiple backends

Build Status Coverage Status npm npm

Keyv provides a consistent interface for key-value storage across multiple backends via storage adapters. It supports TTL based expiry, making it suitable as a cache or a persistent key-value store.

Features

There are a few existing modules similar to Keyv, however Keyv is different because it:

Usage

Install Keyv.

npm install --save keyv

By default everything is stored in memory, you can optionally also install a storage adapter.

npm install --save @keyv/redis
npm install --save @keyv/mongo
npm install --save @keyv/sqlite
npm install --save @keyv/postgres
npm install --save @keyv/mysql

Create a new Keyv instance, passing your connection string if applicable. Keyv will automatically load the correct storage adapter.

const Keyv = require('keyv');

// One of the following
const keyv = new Keyv();
const keyv = new Keyv('redis://user:pass@localhost:6379');
const keyv = new Keyv('mongodb://user:pass@localhost:27017/dbname');
const keyv = new Keyv('sqlite://path/to/database.sqlite');
const keyv = new Keyv('postgresql://user:pass@localhost:5432/dbname');
const keyv = new Keyv('mysql://user:pass@localhost:3306/dbname');

// Handle DB connection errors
keyv.on('error', err => console.log('Connection Error', err));

await keyv.set('foo', 'expires in 1 second', 1000); // true
await keyv.set('foo', 'never expires'); // true
await keyv.get('foo'); // 'never expires'
await keyv.delete('foo'); // true
await keyv.clear(); // undefined

Namespaces

You can namespace your Keyv instance to avoid key collisions and allow you to clear only a certain namespace while using the same database.

const users = new Keyv('redis://user:pass@localhost:6379', { namespace: 'users' });
const cache = new Keyv('redis://user:pass@localhost:6379', { namespace: 'cache' });

await users.set('foo', 'users'); // true
await cache.set('foo', 'cache'); // true
await users.get('foo'); // 'users'
await cache.get('foo'); // 'cache'
await users.clear(); // undefined
await users.get('foo'); // undefined
await cache.get('foo'); // 'cache'

Custom Serializers

Keyv uses json-buffer for data serialization to ensure consistency across different backends.

You can optionally provide your own serialization functions to support extra data types or to serialize to something other than JSON.

const keyv = new Keyv({ serialize: JSON.stringify, deserialize: JSON.parse });

Warning: Using custom serializers means you lose any guarantee of data consistency. You should do extensive testing with your serialisation functions and chosen storage engine.

Official Storage Adapters

The official storage adapters are covered by over 150 integration tests to guarantee consistent behaviour. They are lightweight, efficient wrappers over the DB clients making use of indexes and native TTLs where available.

Database Adapter Native TTL Status
Redis [@keyv/redis](https://github.com/lukechilds/keyv-redis) Yes Build Status Coverage Status
MongoDB [@keyv/mongo](https://github.com/lukechilds/keyv-mongo) Yes Build Status Coverage Status
SQLite [@keyv/sqlite](https://github.com/lukechilds/keyv-sqlite) No Build Status Coverage Status
PostgreSQL [@keyv/postgres](https://github.com/lukechilds/keyv-postgres) No Build Status Coverage Status
MySQL [@keyv/mysql](https://github.com/lukechilds/keyv-mysql) No Build Status Coverage Status

Third-party Storage Adapters

You can also use third-party storage adapters or build your own. Keyv will wrap these storage adapters in TTL functionality and handle complex types internally.

const Keyv = require('keyv');
const myAdapter = require('./my-storage-adapter');

const keyv = new Keyv({ store: myAdapter });

Any store that follows the Map API will work.

new Keyv({ store: new Map() });

For example, quick-lru is a completely unrelated module that implements the Map API.

const Keyv = require('keyv');
const QuickLRU = require('quick-lru');

const lru = new QuickLRU({ maxSize: 1000 });
const keyv = new Keyv({ store: lru });
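To picture what wrapping a store in “TTL functionality” means, here is a minimal self-contained sketch over a Map-like store. This is a simplification with a made-up name (`TTLStore`), not Keyv’s actual implementation, which also handles serialization and namespacing.

```javascript
// Minimal sketch of TTL wrapping over a Map-like store.
// TTLStore is a hypothetical name; this is not Keyv's actual code.
class TTLStore {
  constructor(store = new Map()) {
    this.store = store;
  }
  set(key, value, ttl) {
    // No TTL means the entry never expires.
    const expires = typeof ttl === 'number' ? Date.now() + ttl : Infinity;
    this.store.set(key, { value, expires });
    return true;
  }
  get(key) {
    const entry = this.store.get(key);
    if (!entry) return undefined;
    if (Date.now() > entry.expires) {
      this.store.delete(key); // lazily expire on read
      return undefined;
    }
    return entry.value;
  }
}

const cache = new TTLStore();
cache.set('foo', 'bar');     // no TTL: persists
cache.set('baz', 'qux', -1); // already expired
cache.get('foo'); // 'bar'
cache.get('baz'); // undefined
```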

The following are third-party storage adapters compatible with Keyv:

Keyv is designed to be easily embedded into other modules to add cache support. The recommended pattern is to expose a cache option in your modules options which is passed through to Keyv. Caching will work in memory by default and users have the option to also install a Keyv storage adapter and pass in a connection string, or any other storage that implements the Map API.

You should also set a namespace for your module so you can safely call .clear() without clearing unrelated app data.

Inside your module:

class AwesomeModule {
    constructor(opts) {
        this.cache = new Keyv({
            uri: typeof opts.cache === 'string' && opts.cache,
            store: typeof opts.cache !== 'string' && opts.cache,
            namespace: 'awesome-module'
        });
    }
}

Now it can be consumed like this:

const AwesomeModule = require('awesome-module');

// Caches stuff in memory by default
const awesomeModule = new AwesomeModule();

// After npm install --save keyv-redis
const awesomeModule = new AwesomeModule({ cache: 'redis://localhost' });

// Some third-party module that implements the Map API
const awesomeModule = new AwesomeModule({ cache: some3rdPartyStore });

API

new Keyv(uri, options)

Returns a new Keyv instance.

The Keyv instance is also an EventEmitter that will emit an 'error' event if the storage adapter connection fails.

uri

Type: String
Default: undefined

The connection string URI.

Merged into the options object as options.uri.

options

Type: Object

The options object is also passed through to the storage adapter. Check your storage adapter docs for any extra options.

options.namespace

Type: String
Default: 'keyv'

Namespace for the current instance.

options.ttl

Type: Number
Default: undefined

Default TTL. Can be overridden by specifying a TTL on .set().

options.serialize

Type: Function
Default: JSONB.stringify

A custom serialization function.

options.deserialize

Type: Function
Default: JSONB.parse

A custom deserialization function.

options.store

Type: Storage adapter instance
Default: new Map()

The storage adapter instance to be used by Keyv.

options.adapter

Type: String
Default: undefined

Specify an adapter to use, e.g. 'redis' or 'mongodb'.

Instance

Keys must always be strings. Values can be of any type.

.set(key, value, [ttl])

Set a value.

By default keys are persistent. You can set an expiry TTL in milliseconds.

Returns true.

.get(key)

Returns the value.

.delete(key)

Deletes an entry.

Returns true if the key existed, false if not.

.clear()

Delete all entries in the current namespace.

Returns undefined.



kind-of NPM version NPM monthly downloads NPM total downloads Linux Build Status

Get the native type of a value.

Please consider following this project’s author, Jon Schlinkert, and consider starring the project to show your :heart: and support.

Install

Install with npm:

$ npm install --save kind-of

Install with bower

$ bower install kind-of --save

Why use this?

  1. it’s fast (see Optimizations)
  2. better type checking

Usage

es5, browser and es6 ready

var kindOf = require('kind-of');

kindOf(undefined);
//=> 'undefined'

kindOf(null);
//=> 'null'

kindOf(true);
//=> 'boolean'

kindOf(false);
//=> 'boolean'

kindOf(new Boolean(true));
//=> 'boolean'

kindOf(new Buffer(''));
//=> 'buffer'

kindOf(42);
//=> 'number'

kindOf(new Number(42));
//=> 'number'

kindOf('str');
//=> 'string'

kindOf(new String('str'));
//=> 'string'

kindOf(arguments);
//=> 'arguments'

kindOf({});
//=> 'object'

kindOf(Object.create(null));
//=> 'object'

kindOf(new Test());
//=> 'object'

kindOf(new Date());
//=> 'date'

kindOf([]);
//=> 'array'

kindOf([1, 2, 3]);
//=> 'array'

kindOf(new Array());
//=> 'array'

kindOf(/foo/);
//=> 'regexp'

kindOf(new RegExp('foo'));
//=> 'regexp'

kindOf(function () {});
//=> 'function'

kindOf(function * () {});
//=> 'function'

kindOf(new Function());
//=> 'function'

kindOf(new Map());
//=> 'map'

kindOf(new WeakMap());
//=> 'weakmap'

kindOf(new Set());
//=> 'set'

kindOf(new WeakSet());
//=> 'weakset'

kindOf(Symbol('str'));
//=> 'symbol'

kindOf(new Int8Array());
//=> 'int8array'

kindOf(new Uint8Array());
//=> 'uint8array'

kindOf(new Uint8ClampedArray());
//=> 'uint8clampedarray'

kindOf(new Int16Array());
//=> 'int16array'

kindOf(new Uint16Array());
//=> 'uint16array'

kindOf(new Int32Array());
//=> 'int32array'

kindOf(new Uint32Array());
//=> 'uint32array'

kindOf(new Float32Array());
//=> 'float32array'

kindOf(new Float64Array());
//=> 'float64array'

Release history

v4.0.0

Added

v5.0.0

Added

Fixed

Benchmarks

Benchmarked against typeof and type-of. Note that performance is slower for the ES6 features Map, WeakMap, Set and WeakSet.

#1: array
  current x 23,329,397 ops/sec ±0.82% (94 runs sampled)
  lib-type-of x 4,170,273 ops/sec ±0.55% (94 runs sampled)
  lib-typeof x 9,686,935 ops/sec ±0.59% (98 runs sampled)

#2: boolean
  current x 27,197,115 ops/sec ±0.85% (94 runs sampled)
  lib-type-of x 3,145,791 ops/sec ±0.73% (97 runs sampled)
  lib-typeof x 9,199,562 ops/sec ±0.44% (99 runs sampled)

#3: date
  current x 20,190,117 ops/sec ±0.86% (92 runs sampled)
  lib-type-of x 5,166,970 ops/sec ±0.74% (94 runs sampled)
  lib-typeof x 9,610,821 ops/sec ±0.50% (96 runs sampled)

#4: function
  current x 23,855,460 ops/sec ±0.60% (97 runs sampled)
  lib-type-of x 5,667,740 ops/sec ±0.54% (100 runs sampled)
  lib-typeof x 10,010,644 ops/sec ±0.44% (100 runs sampled)

#5: null
  current x 27,061,047 ops/sec ±0.97% (96 runs sampled)
  lib-type-of x 13,965,573 ops/sec ±0.62% (97 runs sampled)
  lib-typeof x 8,460,194 ops/sec ±0.61% (97 runs sampled)

#6: number
  current x 25,075,682 ops/sec ±0.53% (99 runs sampled)
  lib-type-of x 2,266,405 ops/sec ±0.41% (98 runs sampled)
  lib-typeof x 9,821,481 ops/sec ±0.45% (99 runs sampled)

#7: object
  current x 3,348,980 ops/sec ±0.49% (99 runs sampled)
  lib-type-of x 3,245,138 ops/sec ±0.60% (94 runs sampled)
  lib-typeof x 9,262,952 ops/sec ±0.59% (99 runs sampled)

#8: regex
  current x 21,284,827 ops/sec ±0.72% (96 runs sampled)
  lib-type-of x 4,689,241 ops/sec ±0.43% (100 runs sampled)
  lib-typeof x 8,957,593 ops/sec ±0.62% (98 runs sampled)

#9: string
  current x 25,379,234 ops/sec ±0.58% (96 runs sampled)
  lib-type-of x 3,635,148 ops/sec ±0.76% (93 runs sampled)
  lib-typeof x 9,494,134 ops/sec ±0.49% (98 runs sampled)

#10: undef
  current x 27,459,221 ops/sec ±1.01% (93 runs sampled)
  lib-type-of x 14,360,433 ops/sec ±0.52% (99 runs sampled)
  lib-typeof x 23,202,868 ops/sec ±0.59% (94 runs sampled)

Optimizations

In 7 out of 8 cases, this library is 2x-10x faster than other top libraries included in the benchmarks. There are a few things that lead to this performance advantage, none of them hard and fast rules, but all of them simple and repeatable in almost any code library:

  1. Optimize around the fastest and most common use cases first. Of course, this will change from project-to-project, but I took some time to understand how and why typeof checks were being used in my own libraries and other libraries I use a lot.
  2. Optimize around bottlenecks - In other words, the order in which conditionals are implemented is significant, because each check is only as fast as the failing checks that came before it. Here, the biggest bottleneck by far is checking for plain objects (an object that was created by the Object constructor). I opted to make this check happen by process of elimination rather than brute force up front (e.g. by using something like val.constructor.name), so that every other type check would not be penalized by it.
  3. Don’t do unnecessary processing - why do .slice(8, -1).toLowerCase(); just to get the word regex? It’s much faster to do if (type === '[object RegExp]') return 'regex'.
  4. There is no reason to make the code in a microlib as terse as possible, just to win points for making it shorter. It’s always better to favor performant code over terse code. You will always only be using a single require() statement to use the library anyway, regardless of how the code is written.
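Point 3 above can be illustrated with a standalone sketch (this is an illustration of the technique, not the library's actual source; the function name is made up):

```javascript
// Compare the raw Object.prototype.toString tag directly for a hot type,
// and fall back to generic slice/lowercase handling only when needed.
function kindOfSketch(val) {
  var type = Object.prototype.toString.call(val);
  if (type === '[object RegExp]') return 'regexp'; // fast path, no slicing
  return type.slice(8, -1).toLowerCase();          // generic fallback
}
```

kindOfSketch(/foo/) takes the fast path, while kindOfSketch(new Date()) falls through to the generic branch and returns 'date'.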

Better type checking

kind-of is more correct than other type checking libs I’ve looked at. For example, here are some differing results from other popular libs:

typeof lib

Incorrectly tests instances of custom constructors (pretty common):

var typeOf = require('typeof');
function Test() {}
console.log(typeOf(new Test()));
//=> 'test'

Returns object instead of arguments:

function foo() {
  console.log(typeOf(arguments)) //=> 'object'
}
foo();

type-of lib

Incorrectly returns object for generator functions, buffers, Map, Set, WeakMap and WeakSet:

function * foo() {}
console.log(typeOf(foo));
//=> 'object'
console.log(typeOf(new Buffer('')));
//=> 'object'
console.log(typeOf(new Map()));
//=> 'object'
console.log(typeOf(new Set()));
//=> 'object'
console.log(typeOf(new WeakMap()));
//=> 'object'
console.log(typeOf(new WeakSet()));
//=> 'object'

About

Contributing

Pull requests and stars are always welcome. For bugs and feature requests, please create an issue.

Running Tests

Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command:

Building docs

(This project’s readme.md is generated by verb, please don’t edit the readme directly. Any changes to the readme must be made in the .verb.md readme template.)

To generate the readme, run the following command:

You might also be interested in these projects:

Commits Contributor
82 jonschlinkert
3 aretecode
2 miguelmota
1 dtothefp
1 ksheedlo
1 pdehaan
1 laggingreflex
1 charlike

Author

Jon Schlinkert


This file was generated by verb-generate-readme, v0.6.0, on October 13, 2017.

# type-check Build Status

For updates on type-check, follow me on twitter.

npm install type-check

Quick Examples

// Basic types:
var typeCheck = require('type-check').typeCheck;
typeCheck('Number', 1);               // true
typeCheck('Number', 'str');           // false
typeCheck('Error', new Error);        // true
typeCheck('Undefined', undefined);    // true

// Comment
typeCheck('count::Number', 1);        // true

// One type OR another type:
typeCheck('Number | String', 2);      // true
typeCheck('Number | String', 'str');  // true

// Wildcard, matches all types:
typeCheck('*', 2) // true

// Array, all elements of a single type:
typeCheck('[Number]', [1, 2, 3]);                // true
typeCheck('[Number]', [1, 'str', 3]);            // false

// Tuples, or fixed length arrays with elements of different types:
typeCheck('(String, Number)', ['str', 2]);       // true
typeCheck('(String, Number)', ['str']);          // false
typeCheck('(String, Number)', ['str', 2, 5]);    // false

// Object properties:
typeCheck('{x: Number, y: Boolean}', {x: 2, y: false});             // true
typeCheck('{x: Number, y: Boolean}',       {x: 2});                 // false
typeCheck('{x: Number, y: Maybe Boolean}', {x: 2});                 // true
typeCheck('{x: Number, y: Boolean}',      {x: 2, y: false, z: 3});  // false
typeCheck('{x: Number, y: Boolean, ...}', {x: 2, y: false, z: 3});  // true

// A particular type AND object properties:
typeCheck('RegExp{source: String, ...}', /re/i);          // true
typeCheck('RegExp{source: String, ...}', {source: 're'}); // false

// Custom types:
var opt = {customTypes:
  {Even: { typeOf: 'Number', validate: function(x) { return x % 2 === 0; }}}};
typeCheck('Even', 2, opt); // true

// Nested:
var type = '{a: (String, [Number], {y: Array, ...}), b: Error{message: String, ...}}'
typeCheck(type, {a: ['hi', [1, 2, 3], {y: [1, 'ms']}], b: new Error('oh no')}); // true

Check out the type syntax format and guide.

Usage

require('type-check'); returns an object that exposes four properties. VERSION is the current version of the library as a string. typeCheck, parseType, and parsedTypeCheck are functions.

typeCheck(type, input, options)

typeCheck checks a JavaScript value input against a type written in the type format (taking into account the optional options) and returns whether the input matches the type.

arguments
  • type - String - the type written in the type format which to check against
  • input - * - any JavaScript value, which is to be checked against the type
  • options - Maybe Object - an optional parameter specifying additional options, currently the only available option is specifying custom types
returns

Boolean - whether the input matches the type

example

parseType(type)

parseType parses a string type written in the type format into an object representing the parsed type.

arguments
  • type - String - the type written in the type format which to parse
returns

Object - an object in the parsed type format representing the parsed type

example

parsedTypeCheck(parsedType, input, options)

parsedTypeCheck checks a JavaScript value input against a parsed type in the parsed type format (taking into account the optional options) and returns whether the input matches the type. Use this in conjunction with parseType if you are going to use a type more than once.

arguments
  • type - Object - the type in the parsed type format which to check against
  • input - * - any JavaScript value, which is to be checked against the type
  • options - Maybe Object - an optional parameter specifying additional options, currently the only available option is specifying custom types
returns

Boolean - whether the input matches the type

example

## Type Format

Syntax

White space is ignored. The root node is a Types.

  • Identifier = [\$\w]+ - a group of any lower or upper case letters, numbers, underscores, or dollar signs - eg. String
  • Type = an Identifier, an Identifier followed by a Structure, just a Structure, or a wildcard * - eg. String, Object{x: Number}, {x: Number}, Array{0: String, 1: Boolean, length: Number}, *
  • Types = optionally a comment (an Identifier followed by a ::), optionally the identifier Maybe, one or more Type, separated by | - eg. Number, String | Date, Maybe Number, Maybe Boolean | String
  • Structure = Fields, or a Tuple, or an Array - eg. {x: Number}, (String, Number), [Date]
  • Fields = a {, followed by one or more Field separated by a comma , (trailing comma , is permitted), optionally an ... (always preceded by a comma ,), followed by a } - eg. {x: Number, y: String}, {k: Function, ...}
  • Field = an Identifier, followed by a colon :, followed by Types - eg. x: Date | String, y: Boolean
  • Tuple = a (, followed by one or more Types separated by a comma , (trailing comma , is permitted), followed by a ) - eg. (Date), (Number, Date)
  • Array = a [ followed by exactly one Types followed by a ] - eg. [Boolean], [Boolean | Null]

Guide

type-check uses Object.prototype.toString to find out the basic type of a value.

A basic type, eg. Number, uses this check. This is much more versatile than using typeof - for example, with document, typeof produces 'object' which isn’t that useful, and our technique produces 'HTMLDocument'.
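The raw tags behind this check look like the following (plain JavaScript, no library needed; the helper name is made up for illustration):

```javascript
// Object.prototype.toString yields a '[object Tag]' string; type-check
// derives its basic type names from tags like these.
function tagOf(val) {
  return Object.prototype.toString.call(val);
}
```

For example, tagOf(2) gives '[object Number]' and tagOf(new Date()) gives '[object Date]'.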

You may check for multiple types by separating types with a |. The checker proceeds from left to right, and passes if the value is any of the types - eg. String | Boolean first checks if the value is a string, and then if it is a boolean. If it is none of those, then it returns false.

Adding a Maybe in front of a list of multiple types is the same as also checking for Null and Undefined - eg. Maybe String is equivalent to Undefined | Null | String.
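That equivalence can be sketched standalone (illustration only, not the library's code; names are made up):

```javascript
// Maybe T behaves like Undefined | Null | T: null-ish values pass first,
// otherwise the underlying check decides.
function maybe(check) {
  return function (val) {
    return val === undefined || val === null || check(val);
  };
}
var maybeString = maybe(function (v) { return typeof v === 'string'; });
```

Here maybeString('hi'), maybeString(null), and maybeString(undefined) are all true, while maybeString(5) is false.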

You may add a comment to remind you of what the type is for by following an identifier with a :: before a type (or multiple types). The comment is simply thrown out.

The wildcard * matches all types.

There are three types of structures for checking the contents of a value: ‘fields’, ‘tuple’, and ‘array’.

If used by itself, a ‘fields’ structure will pass with any type of object as long as it is an instance of Object and the properties pass - this allows for duck typing - eg. {x: Boolean}.

To check if the properties pass, and the value is of a certain type, you can specify the type - eg. Error{message: String}.

If you want to make a field optional, you can simply use Maybe - eg. {x: Boolean, y: Maybe String} will still pass if y is undefined (or null).

If you don’t care if the value has properties beyond what you have specified, you can use the ‘etc’ operator ... - eg. {x: Boolean, ...} will match an object with an x property that is a boolean, and with zero or more other properties.

For an array, you must specify one or more types (separated by |) - it will pass for something of any length as long as each element passes the types provided - eg. [Number], [Number | String].

A tuple checks for a fixed number of elements, each of a potentially different type. Each element is separated by a comma - eg. (String, Number).

Array and tuple structures check that the value is of type Array by default, but if another type is specified, they will check for that instead - eg. Int32Array[Number]. You can use the wildcard * to match any type at all.

Check out the type precedence library for type-check.

Options

Options is an object. It is an optional parameter to the typeCheck and parsedTypeCheck functions. The only current option is customTypes.

### Custom Types

Example:

customTypes allows you to set up custom types for validation. The value of this is an object. The keys of the object are the types you will be matching. Each value of the object will be an object having a typeOf property - a string, and validate property - a function.

The typeOf property is the type the value should be (optional - if not set only validate will be used), and validate is a function which should return true if the value is of that type. validate receives one parameter, which is the value that we are checking.
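Putting that together, the Even type from the Quick Examples has this shape (the surrounding typeCheck call is shown only as a comment, since it needs the library itself):

```javascript
// customTypes maps a type name to an optional typeOf string plus a
// validate predicate that receives the value being checked.
var customTypes = {
  Even: {
    typeOf: 'Number',                               // basic type to require first
    validate: function (x) { return x % 2 === 0; }  // extra constraint
  }
};
// Used as: typeCheck('Even', 2, {customTypes: customTypes}); // true
```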

Technical About

type-check is written in LiveScript - a language that compiles to JavaScript. It also uses the prelude.ls library.



Can I cache this? Build Status

CachePolicy tells when responses can be reused from a cache, taking into account HTTP RFC 7234 rules for user agents and shared caches. It also implements RFC 5861, covering stale-if-error and stale-while-revalidate. It’s aware of many tricky details such as the Vary header, proxy revalidation, and authenticated responses.

Usage

Cacheability of an HTTP response depends on how it was requested, so both request and response are required to create the policy.

It may be surprising, but it’s not enough for an HTTP response to be fresh to satisfy a request. It may need to match request headers specified in Vary. Even a matching fresh response may still not be usable if the new request restricted cacheability, etc.

The key method is satisfiesWithoutRevalidation(newRequest), which checks whether the newRequest is compatible with the original request and whether all caching conditions are met.

Constructor options

Request and response must have a headers property with all header names in lower case. url, status and method are optional (defaults are any URL, status 200, and GET method).

If options.shared is true (default), then the response is evaluated from a perspective of a shared cache (i.e. private is not cacheable and s-maxage is respected). If options.shared is false, then the response is evaluated from a perspective of a single-user cache (i.e. private is cacheable and s-maxage is ignored). shared: true is recommended for HTTP clients.

options.cacheHeuristic is a fraction of response’s age that is used as a fallback cache duration. The default is 0.1 (10%), e.g. if a file hasn’t been modified for 100 days, it’ll be cached for 100*0.1 = 10 days.
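The arithmetic from that example, as a one-liner (illustrative only; the library computes this internally from the response's age):

```javascript
// Heuristic freshness: a fraction (cacheHeuristic) of the time since the
// resource was last modified.
function heuristicFreshnessDays(daysSinceModified, cacheHeuristic) {
  return daysSinceModified * cacheHeuristic;
}
```

heuristicFreshnessDays(100, 0.1) is 10, matching the example above.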

options.immutableMinTimeToLive is a number of milliseconds to assume as the default time to cache responses with Cache-Control: immutable. Note that per RFC these can become stale, so max-age still overrides the default.

If options.ignoreCargoCult is true, common anti-cache directives will be completely ignored if the non-standard pre-check and post-check directives are present. These two useless directives are most commonly found in bad StackOverflow answers and PHP’s “session limiter” defaults.

storable()

Returns true if the response can be stored in a cache. If it’s false then you MUST NOT store either the request or the response.

satisfiesWithoutRevalidation(newRequest)

This is the most important method. Use this method to check whether the cached response is still fresh in the context of the new request.

If it returns true, then the given request matches the original response this cache policy has been created with, and the response can be reused without contacting the server. Note that the old response can’t be returned without being updated, see responseHeaders().

If it returns false, then the response may not be matching at all (e.g. it’s for a different URL or method), or may need to be refreshed first (see revalidationHeaders()).

responseHeaders()

Returns updated, filtered set of response headers to return to clients receiving the cached response. This function is necessary, because proxies MUST always remove hop-by-hop headers (such as TE and Connection) and update response’s Age to avoid doubling cache time.
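The two responsibilities described above can be sketched like this (a simplified illustration, not the library's implementation):

```javascript
// Drop hop-by-hop headers and advance Age before serving from cache.
var HOP_BY_HOP = ['connection', 'keep-alive', 'te', 'trailer',
                  'transfer-encoding', 'upgrade',
                  'proxy-authenticate', 'proxy-authorization'];
function cachedResponseHeaders(headers, secondsInCache) {
  var out = {};
  Object.keys(headers).forEach(function (name) {
    if (HOP_BY_HOP.indexOf(name) === -1) out[name] = headers[name];
  });
  out.age = String((parseInt(headers.age, 10) || 0) + secondsInCache);
  return out;
}
```

For example, cachedResponseHeaders({connection: 'keep-alive', age: '10', etag: '"abc"'}, 5) drops connection and bumps age to '15'.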

timeToLive()

Returns approximate time in milliseconds until the response becomes stale (i.e. not fresh).

After that time (when timeToLive() <= 0) the response might not be usable without revalidation. However, there are exceptions, e.g. a client can explicitly allow stale responses, so always check with satisfiesWithoutRevalidation(). stale-if-error and stale-while-revalidate extend the time to live of the cache, so it can still be used while stale.

toObject()/fromObject(json)

Chances are you’ll want to store the CachePolicy object along with the cached response. obj = policy.toObject() gives a plain JSON-serializable object. policy = CachePolicy.fromObject(obj) creates an instance from it.

Refreshing stale cache (revalidation)

When a cached response has expired, it can be made fresh again by making a request to the origin server. The server may respond with status 304 (Not Modified) without sending the response body again, saving bandwidth.

The following methods help perform the update efficiently and correctly.

revalidationHeaders(newRequest)

Returns updated, filtered set of request headers to send to the origin server to check if the cached response can be reused. These headers allow the origin server to return status 304 indicating the response is still fresh. All headers unrelated to caching are passed through as-is.

Use this method when updating cache from the origin server.

revalidatedPolicy(revalidationRequest, revalidationResponse)

Use this method to update the cache after receiving a new response from the origin server. It returns an object with two keys:

  • policy — A new CachePolicy with HTTP headers updated from revalidationResponse. You can always replace the old cached CachePolicy with the new one.
  • modified — Boolean indicating whether the response body has changed.
    • If false, then a valid 304 Not Modified response has been received, and you can reuse the old cached response body. This is also affected by stale-if-error.
    • If true, you should use new response’s body (if present), or make another request to the origin server without any conditional headers (i.e. don’t use revalidationHeaders() this time) to get the new resource.



Used by

Implemented

  • Cache-Control response header with all the quirks.
  • Expires with check for bad clocks.
  • Pragma response header.
  • Age response header.
  • Vary response header.
  • Default cacheability of statuses and methods.
  • Requests for stale data.
  • Filtering of hop-by-hop headers.
  • Basic revalidation request
  • stale-if-error

Unimplemented

  • Merging of range requests, If-Range (but correctly supports them as non-cacheable)
  • Revalidation of multiple representations

Trusting server Date

Per the RFC, the cache should take into account the time between server-supplied Date and the time it received the response. The RFC-mandated behavior creates two problems:

  • Servers with incorrectly set timezone may add several hours to cache age (or more, if the clock is completely wrong).
  • Even reasonably correct clocks may be off by a couple of seconds, breaking max-age=1 trick (which is useful for reverse proxies on high-traffic servers).

Previous versions of this library had an option to ignore the server date if it was “too inaccurate”. To support the max-age=1 trick, the library would also have to ignore dates that are pretty accurate. There’s no point in having an option to trust dates that are only a bit inaccurate, so this library won’t trust any server dates. max-age will be interpreted from the time the response has been received, not from when it has been sent. This will affect only RFC 1149 networks.



Acorn

A tiny, fast JavaScript parser written in JavaScript.

Community

You are welcome to report bugs or create pull requests on GitHub. For questions and discussion, please use the Tern discussion forum.

Installation

The easiest way to install acorn is from npm: npm install acorn.

Alternately, you can download the source and build acorn yourself:

Interface

parse(input, options) is the main interface to the library. The input parameter is a string, options can be undefined or an object setting some of the options listed below. The return value will be an abstract syntax tree object as specified by the ESTree spec.

When encountering a syntax error, the parser will raise a SyntaxError object with a meaningful message. The error object will have a pos property that indicates the string offset at which the error occurred, and a loc object that contains a {line, column} object referring to that same position.

Options can be provided by passing a second argument, which should be an object containing any of these fields:

  • ecmaVersion: Indicates the ECMAScript version to parse. Must be either 3, 5, 6 (2015), 7 (2016), 8 (2017), 9 (2018), 10 (2019) or 11 (2020, partial support). This influences support for strict mode, the set of reserved words, and support for new syntax features. Default is 10.

    NOTE: Only ‘stage 4’ (finalized) ECMAScript features are being implemented by Acorn. Other proposed new features can be implemented through plugins.

  • sourceType: Indicate the mode the code should be parsed in. Can be either "script" or "module". This influences global strict mode and parsing of import and export declarations.

    NOTE: If set to "module", then static import / export syntax will be valid, even if ecmaVersion is less than 6.

  • onInsertedSemicolon: If given a callback, that callback will be called whenever a missing semicolon is inserted by the parser. The callback will be given the character offset of the point where the semicolon is inserted as argument, and if locations is on, also a {line, column} object representing this position.

  • onTrailingComma: Like onInsertedSemicolon, but for trailing commas.

  • allowReserved: If false, using a reserved word will generate an error. Defaults to true for ecmaVersion 3, false for higher versions. When given the value "never", reserved words and keywords can also not be used as property names (as in Internet Explorer’s old parser).

  • allowReturnOutsideFunction: By default, a return statement at the top level raises an error. Set this to true to accept such code.

  • allowImportExportEverywhere: By default, import and export declarations can only appear at a program’s top level. Setting this option to true allows them anywhere where a statement is allowed.

  • allowAwaitOutsideFunction: By default, await expressions can only appear inside async functions. Setting this option to true allows top-level await expressions. They are still not allowed in non-async functions, though.

  • allowHashBang: When this is enabled (off by default), if the code starts with the characters #! (as in a shellscript), the first line will be treated as a comment.

  • locations: When true, each node has a loc object attached with start and end subobjects, each of which contains the one-based line and zero-based column numbers in {line, column} form. Default is false.

  • onToken: If a function is passed for this option, each found token will be passed in the same format as tokens returned from tokenizer().getToken().

    If an array is passed, each found token is pushed to it.

    Note that you are not allowed to call the parser from the callback—that will corrupt its internal state.

  • onComment: If a function is passed for this option, whenever a comment is encountered the function will be called with the following parameters:

    • block: true if the comment is a block comment, false if it is a line comment.
    • text: The content of the comment.
    • start: Character offset of the start of the comment.
    • end: Character offset of the end of the comment.

    When the locations option is on, the {line, column} locations of the comment’s start and end are passed as two additional parameters.

    If an array is passed for this option, each found comment is pushed to it as an object in Esprima format:

    Note that you are not allowed to call the parser from the callback—that will corrupt its internal state.

  • ranges: Nodes have their start and end character offsets recorded in start and end properties (directly on the node, rather than the loc object, which holds line/column data). To also add a semi-standardized range property holding a [start, end] array with the same numbers, set the ranges option to true.

  • program: It is possible to parse multiple files into a single AST by passing the tree produced by parsing the first file as the program option in subsequent parses. This will add the toplevel forms of the parsed file to the “Program” (top) node of an existing parse tree.

  • sourceFile: When the locations option is true, you can pass this option to add a source attribute in every node’s loc object. Note that the contents of this option are not examined or processed in any way; you are free to use whatever format you choose.

  • directSourceFile: Like sourceFile, but a sourceFile property will be added (regardless of the location option) directly to the nodes, rather than the loc object.

  • preserveParens: If this option is true, parenthesized expressions are represented by (non-standard) ParenthesizedExpression nodes that have a single expression property containing the expression inside parentheses.

parseExpressionAt(input, offset, options) will parse a single expression in a string, and return its AST. It will not complain if there is more of the string left after the expression.

tokenizer(input, options) returns an object with a getToken method that can be called repeatedly to get the next token, a {start, end, type, value} object (with added loc property when the locations option is enabled and range property when the ranges option is enabled). When the token’s type is tokTypes.eof, you should stop calling the method, since it will keep returning that same token forever.

In an ES6 environment, the returned result can be used as any other protocol-compliant iterable:

tokTypes holds an object mapping names to the token type objects that end up in the type properties of tokens.

getLineInfo(input, offset) can be used to get a {line, column} object for a given program string and offset.
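A standalone sketch of that computation (not acorn's implementation, but the result format matches: one-based line, zero-based column; the function name is made up):

```javascript
// Count the lines before the offset to get the line number; the column is
// the distance from the last newline.
function lineInfoSketch(input, offset) {
  var before = input.slice(0, offset).split('\n');
  return { line: before.length, column: before[before.length - 1].length };
}
```

For instance, lineInfoSketch('var a;\nvar b;', 8) is { line: 2, column: 1 }.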

The Parser class

Instances of the Parser class contain all the state and logic that drives a parse. It has static methods parse, parseExpressionAt, and tokenizer that match the top-level functions by the same name.

When extending the parser with plugins, you need to call these methods on the extended version of the class. To extend a parser with plugins, you can use its static extend method.

The extend method takes any number of plugin values, and returns a new Parser class that includes the extra parser logic provided by the plugins.

Command line interface

The bin/acorn utility can be used to parse a file from the command line. It accepts as arguments its input file and the following options:

  • --ecma3|--ecma5|--ecma6|--ecma7|--ecma8|--ecma9|--ecma10: Sets the ECMAScript version to parse. Default is version 9.

  • --module: Sets the parsing mode to "module". Is set to "script" otherwise.

  • --locations: Attaches a “loc” object to each node with “start” and “end” subobjects, each of which contains the one-based line and zero-based column numbers in {line, column} form.

  • --allow-hash-bang: If the code starts with the characters #! (as in a shellscript), the first line will be treated as a comment.

  • --compact: No whitespace is used in the AST output.

  • --silent: Do not output the AST, just return the exit status.

  • --help: Print the usage information and quit.

The utility spits out the syntax tree as JSON data.

Existing plugins

Plugins for ECMAScript proposals:



TypeScript ESLint Parser

An ESLint parser which leverages TypeScript ESTree to allow for ESLint to lint TypeScript source code.

CI NPM Version NPM Downloads

Getting Started

You can find our Getting Started docs here

These docs walk you through setting up ESLint, this parser, and our plugin. If you know what you’re doing and just want a quick start, read on…

Quick-start

Installation

Usage

In your ESLint configuration file, set the parser property:

There is sometimes an incorrect assumption that the parser itself is what does everything necessary to facilitate the use of ESLint with TypeScript. In actuality, it is the combination of the parser and one or more plugins which allow you to maximize your usage of ESLint with TypeScript.

For example, once this parser successfully produces an AST for the TypeScript source code, it might well contain some information which simply does not exist in a standard JavaScript context, such as the data for a TypeScript-specific construct, like an interface.

The core rules built into ESLint, such as indent, have no knowledge of such constructs, so they cannot be expected to work with them out of the box.

Instead, you also need to make use of one or more plugins which will add or extend rules with TypeScript-specific features.

By far the most common case will be installing the @typescript-eslint/eslint-plugin plugin, but there are also other relevant options available, such as @typescript-eslint/eslint-plugin-tslint.

Configuration

The following additional configuration options are available by specifying them in parserOptions in your ESLint configuration file.

parserOptions.ecmaFeatures.jsx

Default false.

Enable parsing JSX when true. More details can be found here.

NOTE: this setting does not affect known file types (.js, .jsx, .ts, .tsx, .json) because the TypeScript compiler has its own internal handling for known file extensions. The exact behavior is as follows:

  • if parserOptions.project is not provided:
    • .js, .jsx, .tsx files are parsed as if this is true.
    • .ts files are parsed as if this is false.
    • unknown extensions (.md, .vue) will respect this setting.
  • if parserOptions.project is provided (i.e. you are using rules with type information):
    • .js, .jsx, .tsx files are parsed as if this is true.
    • .ts files are parsed as if this is false.
    • “unknown” extensions (.md, .vue) are parsed as if this is false.

parserOptions.ecmaFeatures.globalReturn

Default false.

This option allows you to tell the parser if you want to allow global return statements in your codebase.

parserOptions.ecmaVersion

Default 2018.

Accepts any valid ECMAScript version number:

  • A version: es3, es5, es6, es7, es8, es9, es10, es11, …, or
  • A year: es2015, es2016, es2017, es2018, es2019, es2020, …

Specifies the version of ECMAScript syntax you want to use. This is used by the parser to determine how to perform scope analysis, and it affects the default

parserOptions.jsxPragma

Default 'React'

The identifier that’s used for JSX Elements creation (after transpilation). If you’re using a library other than React (like preact), then you should change this value.

This should not be a member expression - just the root identifier (i.e. use "React" instead of "React.createElement").

If you provide parserOptions.project, you do not need to set this, as it will be automatically detected from the compiler.

parserOptions.jsxFragmentName

Default null

The identifier that’s used for JSX fragment elements (after transpilation). If null, assumes transpilation will always use a member of the configured jsxPragma. This should not be a member expression - just the root identifier (i.e. use "h" instead of "h.Fragment").

If you provide parserOptions.project, you do not need to set this, as it will be automatically detected from the compiler.

parserOptions.lib

Default ['es2018']

For valid options, see the TypeScript compiler options.

Specifies the TypeScript libs that are available. This is used by the scope analyser to ensure there are global variables declared for the types exposed by TypeScript.

If you provide parserOptions.project, you do not need to set this, as it will be automatically detected from the compiler.

parserOptions.project

Default undefined.

This option allows you to provide a path to your project’s tsconfig.json. This setting is required if you want to use rules which require type information. Relative paths are interpreted relative to the current working directory if tsconfigRootDir is not set. If you intend on running ESLint from directories other than the project root, you should consider using tsconfigRootDir.

  • Accepted values:

  • If you use project references, TypeScript will not automatically use project references to resolve files. This means that you will have to add each referenced tsconfig to the project field either separately, or via a glob.

  • TypeScript will ignore files with duplicate filenames in the same folder (for example, src/file.ts and src/file.js). TypeScript purposely ignores all but one of the files, keeping only the file with the highest-priority extension (the extension priority order, from highest to lowest, is .ts, .tsx, .d.ts, .js, .jsx). For more info see #955.

  • Note that if this setting is specified and createDefaultProgram is not, you must only lint files that are included in the projects as defined by the provided tsconfig.json files. If your existing configuration does not include all of the files you would like to lint, you can create a separate tsconfig.eslint.json as follows:

    {
      // extend your base config so you don't have to redefine your compilerOptions
      "extends": "./tsconfig.json",
      "include": [
        "src/**/*.ts",
        "test/**/*.ts",
        "typings/**/*.ts",
        // etc
    
        // if you have a mixed JS/TS codebase, don't forget to include your JS files
        "src/**/*.js"
      ]
    }

parserOptions.tsconfigRootDir

Default undefined.

This option allows you to provide the root directory for relative tsconfig paths specified in the project option above.

parserOptions.projectFolderIgnoreList

Default ["**/node_modules/**"].

This option allows you to ignore folders from being included in your provided list of projects. This is useful if you have configured glob patterns, but want to make sure you ignore certain folders.

It accepts an array of globs to exclude from the project globs.

For example, by default it will ensure that a glob like ./**/tsconfig.json will not match any tsconfigs within your node_modules folder (some npm packages do not exclude their source files from their published packages).

parserOptions.extraFileExtensions

Default undefined.

This option allows you to provide one or more additional file extensions which should be considered in the TypeScript Program compilation. The default extensions are .ts, .tsx, .js, and .jsx. Add extensions starting with ., followed by the file extension. E.g. for a .vue file use "extraFileExtensions": [".vue"].

parserOptions.warnOnUnsupportedTypeScriptVersion

Default true.

This option allows you to toggle the warning that the parser will give you if you use a version of TypeScript which is not explicitly supported.

parserOptions.createDefaultProgram

Default false.

This option allows you to request that when the project setting is specified, files will be allowed when not included in the projects defined by the provided tsconfig.json files. Using this option will incur significant performance costs. This option is primarily included for backwards-compatibility. See the project section above for more information.

Please see typescript-eslint for the supported TypeScript version.

Please ensure that you are using a supported version before submitting any issues/bug reports.

Reporting Issues

Please use the @typescript-eslint/parser issue template when creating your issue and fill out the information requested as best you can. This will really help us when looking into your issue.

Contributing

See the contributing guide here



levn Build Status

Light ECMAScript (JavaScript) Value Notation. Levn is a library which allows you to parse a string into a JavaScript value based on an expected type. It is meant for short amounts of human-entered data (eg. config files, command line arguments).

How is this different from JSON? levn is meant to be written by humans only, is (due to the previous point) much more concise, can be validated against supplied types, has regex and date literals, and can easily be extended with custom types. On the other hand, it is probably slower and thus less efficient at transporting large amounts of data, which is fine since that is not its purpose.

npm install levn

For updates on levn, follow me on twitter.

Quick Examples

Usage

require('levn'); returns an object that exposes three properties. VERSION is the current version of the library as a string. parse and parsedTypeParse are functions.

parse(type, input, options)

parse casts the string input into a JavaScript value according to the specified type in the type format (and taking account the optional options) and returns the resulting JavaScript value.

arguments
  • type - String - the type written in the type format which to check against
  • input - String - the value written in the levn format
  • options - Maybe Object - an optional parameter specifying additional options
returns

* - the resulting JavaScript value

example

parsedTypeParse(parsedType, input, options)

parsedTypeParse casts the string input into a JavaScript value according to the specified type which has already been parsed (and taking account the optional options) and returns the resulting JavaScript value. You can parse a type using the type-check library’s parseType function.

arguments
  • type - Object - the type in the parsed type format which to check against
  • input - String - the value written in the levn format
  • options - Maybe Object - an optional parameter specifying additional options
returns

* - the resulting JavaScript value

example

Levn Format

Levn can use the type information you provide to choose the appropriate value to produce from the input. For the same input, it will choose a different output value depending on the type provided. For example, parse('Number', '2') will produce the number 2, but parse('String', '2') will produce the string "2".

If you do not provide type information, and simply use *, levn will parse the input according to the unambiguous “explicit” mode, which we will now detail - you can also set the explicit option to true manually in the options.

  • "string", 'string' are parsed as a String, eg. "a msg" is "a msg"
  • #date# is parsed as a Date, eg. #2011-11-11# is new Date('2011-11-11')
  • /regexp/flags is parsed as a RegExp, eg. /re/gi is /re/gi
  • undefined, null, NaN, true, and false are all their JavaScript equivalents
  • [element1, element2, etc] is an Array, and the casting procedure is recursively applied to each element. Eg. [1,2,3] is [1,2,3].
  • (element1, element2, etc) is a tuple, and the casting procedure is recursively applied to each element. Eg. (1, a) is (1, a) (is [1, 'a']).
  • {key1: val1, key2: val2, ...} is an Object, and the casting procedure is recursively applied to each property. Eg. {a: 1, b: 2} is {a: 1, b: 2}.
  • Any text which does not fall under the above, and which does not contain special characters (`[`, `]`, `(`, `)`, `{`, `}`, `:`, `,`) is a string, eg. $12- blah is "$12- blah".
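A few of these explicit-mode rules can be illustrated with a toy caster. This is only a sketch of the behaviour described above, not levn's implementation (which also handles numbers, nesting, escapes, objects, and so on):

```javascript
// Toy cast covering a handful of the explicit-mode literal forms above.
// Illustrative only -- levn's real parser is far more complete.
function explicitCast(raw) {
  var s = raw.trim();
  var m;
  if (/^(['"]).*\1$/.test(s)) return s.slice(1, -1);     // "string" / 'string'
  if (/^#.+#$/.test(s)) return new Date(s.slice(1, -1)); // #date#
  if ((m = /^\/(.*)\/([a-z]*)$/.exec(s)) !== null)
    return new RegExp(m[1], m[2]);                       // /regexp/flags
  if (s === 'undefined') return undefined;
  if (s === 'null') return null;
  if (s === 'NaN') return NaN;
  if (s === 'true' || s === 'false') return s === 'true';
  return s;                                              // anything else: a string
}
```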

If you do provide type information, you can make your input more concise as the program already has some information about what it expects. Please see the type format section of type-check for more information about how to specify types. There are some rules about what levn can do with the information:

  • If a String is expected, and only a String, all characters of the input (including any special ones) will become part of the output. Eg. [({})] is "[({})]", and "hi" is '"hi"'.
  • If a Date is expected, the surrounding # can be omitted from date literals. Eg. 2011-11-11 is new Date('2011-11-11').
  • If a RegExp is expected, no flags need to be specified, and if the regex does not use any of the special characters, the opening and closing / can be omitted - this will have the effect of setting the source of the regex to the input. Eg. regex is /regex/.
  • If an Array is expected, and it is the root node (at the top level), the opening [ and closing ] can be omitted. Eg. 1,2,3 is [1,2,3].
  • If a tuple is expected, and it is the root node (at the top level), the opening ( and closing ) can be omitted. Eg. 1, a is (1, a) (is [1, 'a']).
  • If an Object is expected, and it is the root node (at the top level), the opening { and closing } can be omitted. Eg. a: 1, b: 2 is {a: 1, b: 2}.

If you list multiple types (eg. Number | String), it will first attempt to cast to the first type and then validate - if the validation fails it will move on to the next type and so forth, left to right. You must be careful as some types will succeed with any input, such as String. Thus put String at the end of your list. In non-explicit mode, Date and RegExp will succeed with a large variety of input - also be careful with these and list them near the end if not last in your list.

Whitespace between special characters and elements is inconsequential.

Options

Options is an object. It is an optional parameter to the parse and parsedTypeParse functions.

Explicit

A Boolean. By default it is false.

Example:

explicit sets whether to be in explicit mode or not. Using * automatically activates explicit mode. For more information, read the levn format section.

customTypes

An Object. Empty {} by default.

Example:

Another Example:

customTypes is an object whose keys are the name of the types, and whose values are an object with three properties, typeOf, validate, and cast. For more information about typeOf and validate, please see the custom types section of type-check.

cast is a function which receives three arguments, the value under question, options, and the typesCast function. In cast, attempt to cast the value into the specified type. If you are successful, return an object in the format {type: 'Just', value: CAST-VALUE}, if you know it won’t work, return {type: 'Nothing'}. You can use the typesCast function to cast any child values. Remember to pass options to it. In your function you can also check for options.explicit and act accordingly.

Technical About

levn is written in LiveScript - a language that compiles to JavaScript. It uses type-check to both parse types and validate values. It also uses the prelude.ls library.



Acorn

A tiny, fast JavaScript parser written in JavaScript.

Community

You are welcome to report bugs or create pull requests on github. For questions and discussion, please use the Tern discussion forum.

Installation

The easiest way to install acorn is from npm:

Alternately, you can download the source and build acorn yourself:

Interface

parse(input, options) is the main interface to the library. The input parameter is a string, options must be an object setting some of the options listed below. The return value will be an abstract syntax tree object as specified by the ESTree spec.

When encountering a syntax error, the parser will raise a SyntaxError object with a meaningful message. The error object will have a pos property that indicates the string offset at which the error occurred, and a loc object that contains a {line, column} object referring to that same position.

Options are provided in a second argument, which should be an object containing any of these fields (only ecmaVersion is required):

  • ecmaVersion: Indicates the ECMAScript version to parse. Must be either 3, 5, 6 (or 2015), 7 (2016), 8 (2017), 9 (2018), 10 (2019), 11 (2020), or 12 (2021, partial support), or "latest" (the latest the library supports). This influences support for strict mode, the set of reserved words, and support for new syntax features.

    NOTE: Only ‘stage 4’ (finalized) ECMAScript features are being implemented by Acorn. Other proposed new features must be implemented through plugins.

  • sourceType: Indicate the mode the code should be parsed in. Can be either "script" or "module". This influences global strict mode and parsing of import and export declarations.

    NOTE: If set to "module", then static import / export syntax will be valid, even if ecmaVersion is less than 6.

  • onInsertedSemicolon: If given a callback, that callback will be called whenever a missing semicolon is inserted by the parser. The callback will be given the character offset of the point where the semicolon is inserted as argument, and if locations is on, also a {line, column} object representing this position.

  • onTrailingComma: Like onInsertedSemicolon, but for trailing commas.

  • allowReserved: If false, using a reserved word will generate an error. Defaults to true for ecmaVersion 3, false for higher versions. When given the value "never", reserved words and keywords can also not be used as property names (as in Internet Explorer’s old parser).

  • allowReturnOutsideFunction: By default, a return statement at the top level raises an error. Set this to true to accept such code.

  • allowImportExportEverywhere: By default, import and export declarations can only appear at a program’s top level. Setting this option to true allows them anywhere where a statement is allowed.

  • allowAwaitOutsideFunction: By default, await expressions can only appear inside async functions. Setting this option to true allows top-level await expressions. They are still not allowed in non-async functions, though.

  • allowHashBang: When this is enabled (off by default), if the code starts with the characters #! (as in a shellscript), the first line will be treated as a comment.

  • locations: When true, each node has a loc object attached with start and end subobjects, each of which contains the one-based line and zero-based column numbers in {line, column} form. Default is false.

  • onToken: If a function is passed for this option, each found token will be passed in same format as tokens returned from tokenizer().getToken().

    If an array is passed, each found token is pushed to it.

    Note that you are not allowed to call the parser from the callback—that will corrupt its internal state.

  • onComment: If a function is passed for this option, whenever a comment is encountered the function will be called with the following parameters:

    • block: true if the comment is a block comment, false if it is a line comment.
    • text: The content of the comment.
    • start: Character offset of the start of the comment.
    • end: Character offset of the end of the comment.

    When the locations options is on, the {line, column} locations of the comment’s start and end are passed as two additional parameters.

    If an array is passed for this option, each found comment is pushed to it as an object in Esprima format:

    Note that you are not allowed to call the parser from the callback—that will corrupt its internal state.

  • ranges: Nodes have their start and end character offsets recorded in start and end properties (directly on the node, rather than the loc object, which holds line/column data). To also add a semi-standardized range property holding a [start, end] array with the same numbers, set the ranges option to true.

  • program: It is possible to parse multiple files into a single AST by passing the tree produced by parsing the first file as the program option in subsequent parses. This will add the toplevel forms of the parsed file to the “Program” (top) node of an existing parse tree.

  • sourceFile: When the locations option is true, you can pass this option to add a source attribute in every node’s loc object. Note that the contents of this option are not examined or processed in any way; you are free to use whatever format you choose.

  • directSourceFile: Like sourceFile, but a sourceFile property will be added (regardless of the location option) directly to the nodes, rather than the loc object.

  • preserveParens: If this option is true, parenthesized expressions are represented by (non-standard) ParenthesizedExpression nodes that have a single expression property containing the expression inside parentheses.

parseExpressionAt(input, offset, options) will parse a single expression in a string, and return its AST. It will not complain if there is more of the string left after the expression.

tokenizer(input, options) returns an object with a getToken method that can be called repeatedly to get the next token, a {start, end, type, value} object (with added loc property when the locations option is enabled and range property when the ranges option is enabled). When the token’s type is tokTypes.eof, you should stop calling the method, since it will keep returning that same token forever.

In an ES6 environment, the returned result can be used like any other protocol-compliant iterable:

tokTypes holds an object mapping names to the token type objects that end up in the type properties of tokens.

getLineInfo(input, offset) can be used to get a {line, column} object for a given program string and offset.

The Parser class

Instances of the Parser class contain all the state and logic that drives a parse. It has static methods parse, parseExpressionAt, and tokenizer that match the top-level functions by the same name.

When extending the parser with plugins, you need to call these methods on the extended version of the class. To extend a parser with plugins, you can use its static extend method.

The extend method takes any number of plugin values, and returns a new Parser class that includes the extra parser logic provided by the plugins.

Command line interface

The bin/acorn utility can be used to parse a file from the command line. It accepts as arguments its input file and the following options:

  • --ecma3|--ecma5|--ecma6|--ecma7|--ecma8|--ecma9|--ecma10: Sets the ECMAScript version to parse. Default is version 9.

  • --module: Sets the parsing mode to "module". Is set to "script" otherwise.

  • --locations: Attaches a “loc” object to each node with “start” and “end” subobjects, each of which contains the one-based line and zero-based column numbers in {line, column} form.

  • --allow-hash-bang: If the code starts with the characters #! (as in a shellscript), the first line will be treated as a comment.

  • --compact: No whitespace is used in the AST output.

  • --silent: Do not output the AST, just return the exit status.

  • --help: Print the usage information and quit.

The utility spits out the syntax tree as JSON data.

Existing plugins

Plugins for ECMAScript proposals:

bignumber.js

A JavaScript library for arbitrary-precision decimal and non-decimal arithmetic.

npm version build status


Features

  • Integers and decimals
  • Simple API but full-featured
  • Faster, smaller, and perhaps easier to use than JavaScript versions of Java’s BigDecimal
  • 8 KB minified and gzipped
  • Replicates the toExponential, toFixed, toPrecision and toString methods of JavaScript’s Number type
  • Includes a toFraction and a correctly-rounded squareRoot method
  • No dependencies
  • Wide platform compatibility: uses JavaScript 1.5 (ECMAScript 3) features only
  • Comprehensive documentation and test set

API

If a smaller and simpler library is required see big.js. It’s less than half the size but only works with decimal numbers and only has half the methods. It also does not allow NaN or Infinity, or have the configuration options of this library.

See also decimal.js, which among other things adds support for non-integer powers, and performs all operations to a specified number of significant digits.

Load

The library is the single JavaScript file bignumber.js or ES module bignumber.mjs.

Browser:

ES module

Node.js:

ES module

Use

The library exports a single constructor function, BigNumber, which accepts a value of type Number, String or BigNumber,

To get the string value of a BigNumber use toString() or toFixed(). Using toFixed() prevents exponential notation being returned, no matter how large or small the value.

If the limited precision of Number values is not well understood, it is recommended to create BigNumbers from String values rather than Number values to avoid a potential loss of precision.

In all further examples below, let, semicolons and toString calls are not shown. If a commented-out value is in quotes it means toString has been called on the preceding expression.

When creating a BigNumber from a Number, note that a BigNumber is created from a Number’s decimal toString() value not from its underlying binary value. If the latter is required, then pass the Number’s toString(2) value and specify base 2.

BigNumbers can be created from values in bases from 2 to 36. See ALPHABET to extend this range.

Performance is better if base 10 is NOT specified for decimal values. Only specify base 10 when it is desired that the number of decimal places of the input value be limited to the current DECIMAL_PLACES setting.

A BigNumber is immutable in the sense that it is not changed by its methods.

The methods that return a BigNumber can be chained.

Some of the longer method names have a shorter alias.

As with JavaScript’s Number type, there are toExponential, toFixed and toPrecision methods.

A base can be specified for toString.

Performance is better if base 10 is NOT specified, i.e. use toString() not toString(10). Only specify base 10 when it is desired that the number of decimal places be limited to the current DECIMAL_PLACES setting.

There is a toFormat method which may be useful for internationalisation.

The maximum number of decimal places of the result of an operation involving division (i.e. a division, square root, base conversion or negative power operation) is set using the set or config method of the BigNumber constructor.

The other arithmetic operations always give the exact result.

There is a toFraction method with an optional maximum denominator argument

and isNaN and isFinite methods, as NaN and Infinity are valid BigNumber values.

The value of a BigNumber is stored in a decimal floating point format in terms of a coefficient, exponent and sign.

For advanced usage, multiple BigNumber constructors can be created, each with their own independent configuration.

To avoid having to call toString or valueOf on a BigNumber to get its value in the Node.js REPL or when using console.log use

For further information see the API reference in the doc directory.

Test

"

The test/modules directory contains the test scripts for each method.

The tests can be run with Node.js or a browser. For Node.js use

npm test

or

$ node test/test

To test a single method, use, for example

$ node test/methods/toFraction

For the browser, open test/test.html.

Build

For Node, if uglify-js is installed

npm install uglify-js -g

then

npm run build

will create bignumber.min.js.

A source map will also be created in the root directory.

Licence

See LICENCE.



jsprim: utilities for primitive JavaScript types

This module provides miscellaneous facilities for working with strings, numbers, dates, and objects and arrays of these basic types.

deepCopy(obj)

Creates a deep copy of a primitive type, object, or array of primitive types.

deepEqual(obj1, obj2)

Returns whether two objects are deeply equal.

isEmpty(obj)

Returns true if the given object has no properties and false otherwise. This is O(1) (unlike Object.keys(obj).length === 0, which is O(N)).
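The O(1) behaviour follows from bailing out at the first enumerated key rather than materialising the whole key list; a sketch of the idea (not necessarily jsprim's exact code):

```javascript
// Return as soon as any enumerable property is seen.
function isEmpty(obj) {
  for (var key in obj)
    return false;
  return true;
}
```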

hasKey(obj, key)

Returns true if the given object has an enumerable, non-inherited property called key. For information on enumerability and ownership of properties, see the MDN documentation.

forEachKey(obj, callback)

Like Array.forEach, but iterates enumerable, owned properties of an object rather than elements of an array. Equivalent to:

for (var key in obj) {
        if (Object.prototype.hasOwnProperty.call(obj, key)) {
                callback(key, obj[key]);
        }
}

flattenObject(obj, depth)

Flattens an object up to a given level of nesting, returning an array of arrays of length “depth + 1”, where the first “depth” elements correspond to flattened columns and the last element contains the remaining object. For example:

flattenObject({
    'I': {
        'A': {
            'i': {
                'datum1': [ 1, 2 ],
                'datum2': [ 3, 4 ]
            },
            'ii': {
                'datum1': [ 3, 4 ]
            }
        },
        'B': {
            'i': {
                'datum1': [ 5, 6 ]
            },
            'ii': {
                'datum1': [ 7, 8 ],
                'datum2': [ 3, 4 ],
            },
            'iii': {
            }
        }
    },
    'II': {
        'A': {
            'i': {
                'datum1': [ 1, 2 ],
                'datum2': [ 3, 4 ]
            }
        }
    }
}, 3)

becomes:

[
    [ 'I',  'A', 'i',   { 'datum1': [ 1, 2 ], 'datum2': [ 3, 4 ] } ],
    [ 'I',  'A', 'ii',  { 'datum1': [ 3, 4 ] } ],
    [ 'I',  'B', 'i',   { 'datum1': [ 5, 6 ] } ],
    [ 'I',  'B', 'ii',  { 'datum1': [ 7, 8 ], 'datum2': [ 3, 4 ] } ],
    [ 'I',  'B', 'iii', {} ],
    [ 'II', 'A', 'i',   { 'datum1': [ 1, 2 ], 'datum2': [ 3, 4 ] } ]
]

This function is strict: “depth” must be a non-negative integer and “obj” must be a non-null object with at least “depth” levels of nesting under all keys.

flattenIter(obj, depth, func)

This is similar to flattenObject except that instead of returning an array, this function invokes func(entry) for each entry in the array that flattenObject would return. flattenIter(obj, depth, func) is logically equivalent to flattenObject(obj, depth).forEach(func). Importantly, this version never constructs the full array. Its memory usage is O(depth) rather than O(n) (where n is the number of flattened elements).

There’s another difference between flattenObject and flattenIter that’s related to the special case where depth === 0. In this case, flattenObject omits the array wrapping obj (which is regrettable).
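The streaming behaviour can be sketched as follows (illustrative, not jsprim's code):

```javascript
// Call func once per flattened row, never building the full array.
function flattenIter(obj, depth, func) {
  function descend(node, keys) {
    if (keys.length === depth) {
      func(keys.concat([ node ]));  // [ key1, ..., keyDepth, remainder ]
      return;
    }
    Object.keys(node).forEach(function (key) {
      descend(node[key], keys.concat([ key ]));
    });
  }
  descend(obj, []);
}
```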

pluck(obj, key)

Fetch nested property “key” from object “obj”, traversing objects as needed. For example, pluck(obj, "foo.bar.baz") is roughly equivalent to obj.foo.bar.baz, except that:

  1. If traversal fails, the resulting value is undefined, and no error is thrown. For example, pluck({}, "foo.bar") is just undefined.
  2. If “obj” has property “key” directly (without traversing), the corresponding property is returned. For example, pluck({ 'foo.bar': 1 }, 'foo.bar') is 1, not undefined. This is also true recursively, so pluck({ 'a': { 'foo.bar': 1 } }, 'a.foo.bar') is also 1, not undefined.
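Both rules can be captured in a short sketch (illustrative, not jsprim's implementation):

```javascript
// Direct own property wins; otherwise split on the first '.' and recurse.
function pluck(obj, key) {
  if (obj === null || typeof obj !== 'object')
    return undefined;
  if (Object.prototype.hasOwnProperty.call(obj, key))
    return obj[key];
  var dot = key.indexOf('.');
  if (dot === -1)
    return undefined;
  return pluck(obj[key.slice(0, dot)], key.slice(dot + 1));
}
```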

randElt(array)

Returns an element from “array” selected uniformly at random. If “array” is empty, throws an Error.

startsWith(str, prefix)

Returns true if the given string starts with the given prefix and false otherwise.

endsWith(str, suffix)

Returns true if the given string ends with the given suffix and false otherwise.

parseInteger(str, options)

Parses the contents of str (a string) as an integer. On success, the integer value is returned (as a number). On failure, an error is returned describing why parsing failed.

By default, leading and trailing whitespace characters are not allowed, nor are trailing characters that are not part of the numeric representation. This behaviour can be toggled by using the options below. The empty string ('') is not considered valid input. If the return value cannot be precisely represented as a number (i.e., is smaller than Number.MIN_SAFE_INTEGER or larger than Number.MAX_SAFE_INTEGER), an error is returned. Additionally, the string '-0' will be parsed as the integer 0, instead of as the IEEE floating point value -0.

This function accepts both upper and lowercase characters for digits, similar to parseInt(), Number(), and strtol(3C).

The following may be specified in options:

Option Type Default Meaning
base number 10 numeric base (radix) to use, in the range 2 to 36
allowSign boolean true whether to interpret any leading + (positive) and - (negative) characters
allowImprecise boolean false whether to accept values that may have lost precision (past MAX_SAFE_INTEGER or below MIN_SAFE_INTEGER)
allowPrefix boolean false whether to interpret the prefixes 0b (base 2), 0o (base 8), 0t (base 10), or 0x (base 16)
allowTrailing boolean false whether to ignore trailing characters
trimWhitespace boolean false whether to trim any leading or trailing whitespace/line terminators
leadingZeroIsOctal boolean false whether a leading zero indicates octal

Note that if base is unspecified and allowPrefix or leadingZeroIsOctal is specified, then the leading characters can change the default base from 10. If base is explicitly specified and allowPrefix is true, then the prefix will only be accepted if it matches the specified base. base and leadingZeroIsOctal cannot be used together.
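To illustrate the contract, here is a toy sketch that returns (rather than throws) an Error on failure and supports only the base and trimWhitespace options; the real parseInteger handles far more cases:

```javascript
// Toy sketch of parseInteger()'s contract (illustration only, not the
// real implementation): errors are returned, never thrown.
function parseIntegerSketch(str, options = {}) {
  const base = options.base || 10;
  const s = options.trimWhitespace ? str.trim() : str;
  const m = /^([-+]?)([0-9a-zA-Z]+)$/.exec(s);
  if (m === null) {
    return new Error('invalid number: ' + JSON.stringify(str));
  }
  let value = 0;
  for (const ch of m[2].toLowerCase()) {
    // '0'-'9' map to 0-9; 'a'-'z' map to 10-35.
    const digit = ch >= '0' && ch <= '9'
      ? ch.charCodeAt(0) - 48
      : ch.charCodeAt(0) - 87;
    if (digit >= base) {
      return new Error('invalid digit for base ' + base + ': ' + ch);
    }
    value = value * base + digit;
  }
  return m[1] === '-' ? -value : value;
}

console.log(parseIntegerSketch('16'));                  // 16
console.log(parseIntegerSketch('10', { base: 2 }));     // 2
console.log(parseIntegerSketch('3f') instanceof Error); // true
```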

Context: It’s tricky to parse integers with JavaScript’s built-in facilities for several reasons:

  • parseInt() and Number() by default allow the base to be specified in the input string by a prefix (e.g., 0x for hex).
  • parseInt() allows trailing nonnumeric characters.
  • Number(str) returns 0 when str is the empty string ('').
  • Both functions return incorrect values when the input string represents a valid integer outside the range of integers that can be represented precisely. Specifically, parseInt('9007199254740993') returns 9007199254740992.
  • Both functions always accept - and + signs before the digit.
  • Some older JavaScript engines always interpret a leading 0 as indicating octal, which can be surprising when parsing input from users who expect a leading zero to be insignificant.

While each of these may be desirable in some contexts, there are also times when none of them is wanted. parseInteger() grants greater control over what input is permissible.

iso8601(date)

Converts a Date object to an ISO8601 date string of the form “YYYY-MM-DDTHH:MM:SS.sssZ”. This format is not customizable.

parseDateTime(str)

Parses a date expressed as a string, as either a number of milliseconds since the epoch or any string format that Date accepts, giving preference to the former where these two sets overlap (e.g., strings containing small numbers).
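A sketch of that preference rule (hypothetical, for illustration only):

```javascript
// Sketch of parseDateTime()'s documented preference: a string of digits
// is treated as milliseconds since the epoch; anything else goes to Date.
function parseDateTimeSketch(str) {
  if (/^-?[0-9]+$/.test(str)) {
    return new Date(Number(str));
  }
  return new Date(str);
}

console.log(parseDateTimeSketch('0').toISOString());
// 1970-01-01T00:00:00.000Z (a small number means milliseconds, not a year)
```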

hrtimeDiff(timeA, timeB)

Computes the difference between two hrtime intervals (as from Node’s process.hrtime()), where timeA is a later reading than timeB, returning a new hrtime interval array. This function does not modify either input argument.

hrtimeAdd(timeA, timeB)

Adds two hrtime intervals (as from Node’s process.hrtime()), returning a new hrtime interval array. This function does not modify either input argument.

hrtimeAccum(timeA, timeB)

Add two hrtime intervals (as from Node’s process.hrtime()), storing the result in timeA. This function overwrites (and returns) the first argument passed in.

hrtimeNanosec(timeA), hrtimeMicrosec(timeA), hrtimeMillisec(timeA)

This suite of functions converts a hrtime interval (as from Node’s process.hrtime()) into a scalar number of nanoseconds, microseconds or milliseconds. Results are truncated, as with Math.floor().
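Assuming an hrtime interval is a [seconds, nanoseconds] pair (as process.hrtime() returns), the conversions amount to the following sketch (not the library’s actual code):

```javascript
// Sketch of the hrtime conversion helpers: scale [seconds, nanoseconds]
// to a single scalar, truncating with Math.floor() as documented.
function hrtimeNanosec(t)  { return Math.floor(t[0] * 1e9 + t[1]); }
function hrtimeMicrosec(t) { return Math.floor(t[0] * 1e6 + t[1] / 1e3); }
function hrtimeMillisec(t) { return Math.floor(t[0] * 1e3 + t[1] / 1e6); }

console.log(hrtimeNanosec([1, 500000000]));  // 1500000000
console.log(hrtimeMillisec([1, 500000000])); // 1500
```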

validateJsonObject(schema, object)

Uses JSON validation (via JSV) to validate the given object against the given schema. On success, returns null. On failure, returns (does not throw) a useful Error object.

extraProperties(object, allowed)

Check an object for unexpected properties. Accepts the object to check, and an array of allowed property name strings. If extra properties are detected, an array of extra property names is returned. If no properties other than those in the allowed list are present on the object, the returned array will be of zero length.
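A minimal sketch of this behavior (hypothetical re-implementation, not the library’s code):

```javascript
// Sketch of extraProperties(): report every key of the object that is
// not in the allowed list; an empty array means nothing unexpected.
function extraProperties(object, allowed) {
  return Object.keys(object).filter((key) => allowed.indexOf(key) === -1);
}

console.log(extraProperties({ host: 'x', port: 80, debug: true },
    ['host', 'port'])); // [ 'debug' ]
console.log(extraProperties({ host: 'x' }, ['host', 'port'])); // []
```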

mergeObjects(provided, overrides, defaults)

Merge properties from objects “provided”, “overrides”, and “defaults”. The intended use case is for functions that accept named arguments in an “args” object, but want to provide some default values and override other values. In that case, “provided” is what the caller specified, “overrides” are what the function wants to override, and “defaults” contains default values.

The function starts with the values in “defaults”, overrides them with the values in “provided”, and then overrides those with the values in “overrides”. For convenience, any of these objects may be falsey, in which case they will be ignored. The input objects are never modified, but properties in the returned object are not deep-copied.

For example:

mergeObjects(undefined, { 'objectMode': true }, { 'highWaterMark': 0 })

returns:

{ 'objectMode': true, 'highWaterMark': 0 }

For another example:

mergeObjects(
    { 'highWaterMark': 16, 'objectMode': 7 }, /* from caller */
    { 'objectMode': true },                   /* overrides */
    { 'highWaterMark': 0 });                  /* default */

returns:

{ 'objectMode': true, 'highWaterMark': 16 }
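The layering described above amounts to a guarded Object.assign; a minimal sketch (not the library’s actual code):

```javascript
// Sketch of mergeObjects(): start from defaults, layer provided on top,
// then overrides; falsey inputs are ignored and no input is modified.
function mergeObjects(provided, overrides, defaults) {
  return Object.assign({}, defaults || {}, provided || {}, overrides || {});
}

console.log(mergeObjects(undefined, { objectMode: true }, { highWaterMark: 0 }));
// { highWaterMark: 0, objectMode: true }
```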


Contributing

See separate contribution guidelines.



@datastructures-js/set

build:? npm npm npm

extends the JavaScript ES6 Set class and implements new functions on it.



Table of Contents

Install

API

require

import

javascript Set class

It extends ES6 Set class so it already has all the Set functionality.

https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Set

Construction

The constructor accepts an optional array of elements, just like Set.

Example

.union(set)

applies union with another set and returns a set with all elements of the two.

https://en.wikipedia.org/wiki/Union_(set_theory)

union

params
name type
set Set
runtime explanation
O(n+m) n = number of elements of the first set

m = number of elements of the second set
return
EnhancedSet

Example
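A toy equivalent using plain ES6 Sets, to illustrate the result (EnhancedSet.union itself returns an EnhancedSet):

```javascript
// Toy union over plain ES6 Sets: all elements of both sets, no duplicates.
function union(setA, setB) {
  return new Set([...setA, ...setB]);
}

const result = union(new Set([1, 2, 3]), new Set([3, 4, 5]));
console.log([...result]); // [ 1, 2, 3, 4, 5 ]
```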

.intersect(set)

intersects the set with another set and returns a set with existing elements in both sets.

https://en.wikipedia.org/wiki/Intersection_(set_theory)

intersect

params
name type
set Set
runtime explanation
O(n) n = number of elements of the set
return
EnhancedSet

Example

.complement(set)

returns elements in a set and not in the other set relative to their union.

https://en.wikipedia.org/wiki/Complement_(set_theory)

complement

return
EnhancedSet

Example

.isSubsetOf(set)

checks if the set is a subset of another set and returns true if all elements of the set exist in the other set.

https://en.wikipedia.org/wiki/Subset

subset

params
name type
set Set
runtime explanation
O(n) n = number of elements of the set
return
boolean

Example

.isSupersetOf(set)

checks if the set is a superset of another set and returns true if all elements of the other set exist in the set.

https://en.wikipedia.org/wiki/Subset

subset

params
name type
set Set
runtime explanation
O(n) n = number of elements of the set

Example

.product(set, separator)

applies the cartesian product between two sets. The default separator is the empty string ''.

https://en.wikipedia.org/wiki/Cartesian_product

product

params
name type
set Set
separator string
runtime explanation
O(n*m) n = number of elements of the first set

m = number of elements of the second set
return
EnhancedSet

Example
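A toy equivalent using plain ES6 Sets, to illustrate the result (the real method returns an EnhancedSet):

```javascript
// Toy cartesian product: every pairing of the two sets' elements,
// joined by the separator (default '').
function product(setA, setB, separator = '') {
  const result = new Set();
  for (const a of setA) {
    for (const b of setB) {
      result.add(`${a}${separator}${b}`);
    }
  }
  return result;
}

console.log([...product(new Set(['a', 'b']), new Set(['1', '2']), '-')]);
// [ 'a-1', 'a-2', 'b-1', 'b-2' ]
```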

.power(m, separator)

applies the cartesian product of the set with itself. It projects the power concept onto sets and also accepts a separator, defaulting to the empty string ''.

product

params
name type
m number
separator string
runtime explanation
O(n^m) n = number of elements of the set

m = the multiplication power number
return
EnhancedSet

Example

.permutations(m, separator)

generates m-element permutations from the set elements. It also accepts a separator, defaulting to the empty string ''.

perms

params
name type
m number
separator string
runtime explanation
O(n^m) n = number of elements of the set

m = the multiplication power number
return
EnhancedSet

Example

.equals(set)

checks if two sets are equal.

params
name type
set Set
runtime explanation
O(n) n = number of elements of the set
return
boolean

Example

.filter(cb)

filters the set based on a callback and returns the filtered set.

params
name type
cb function
runtime explanation
O(n) n = number of elements of the set
return
EnhancedSet

Example

.toArray()

converts the set into an array.

return
array

Example

.clone()

clones the set.

return
EnhancedSet

Build

grunt build


:rocket: fast-glob

A faster alternative to node-glob.

:bulb: Highlights

  • :rocket: Fast, by using Streams and Promises. Uses readdir-enhanced and micromatch.
  • :beginner: User-friendly, since it supports multiple and negated patterns (['*', '!*.md']).
  • :vertical_traffic_light: Rational, because it doesn’t read excluded directories (!**/node_modules/**).
  • :gear: Universal, because it supports Synchronous, Promise and Stream APIs.
  • :money_with_wings: Economical, because it provides fs.Stats for matched paths if you want them.

If you want to thank me, or promote your Issue.

Donate

Sorry, but I have a job, and supporting packages requires some time after work. I will be glad of your support and PRs.

Install

npm install --save fast-glob

Usage

Asynchronous

Synchronous

Stream

API

fg(patterns, options)

fg.async(patterns, options)

Returns a Promise with an array of matching entries.

fg.sync(patterns, options)

Returns an array of matching entries.

fg.stream(patterns, options)

Returns a ReadableStream that emits a data event for each matched Entry.

patterns

  • Type: string|string[]

This package does not respect the order of patterns. First, all the negative patterns are applied, and only then the positive patterns.

options

  • Type: Object

See options section for more detailed information.

fg.generateTasks(patterns, options)

Returns a set of tasks based on the provided patterns. All tasks satisfy the Task interface:

Entry

An entry is a string if the stats option is disabled; otherwise it is an fs.Stats object with two additional properties, path and depth.

Options

cwd

  • Type: string
    • Default: process.cwd()

The current working directory in which to search.

deep

  • Type: number|boolean
    • Default: true

The deep option can be set to true to traverse the entire directory structure, or it can be set to a number to only traverse that many levels deep.

For example, you have the following tree:

test
└── one
    └── two
        └── index.js

:book: If you specify a pattern with some base directory, this directory will not participate in the calculation of the depth of the found directories. Think of it as a cwd option.

ignore

  • Type: string[]
    • Default: []

An array of glob patterns to exclude matches.

dot

  • Type: boolean
    • Default: false

Allow patterns to match filenames starting with a period (files & directories), even if the pattern does not explicitly have a period in that spot.

stats

  • Type: boolean
    • Default: false

Return fs.Stats with two additional path and depth properties instead of a string.

onlyFiles

  • Type: boolean
    • Default: true

Return only files.

onlyDirectories

  • Type: boolean
    • Default: false

Return only directories.

followSymlinkedDirectories

  • Type: boolean
    • Default: true

Follow symlinked directories when expanding ** patterns.

unique

  • Type: boolean
    • Default: true

Prevent duplicate results.

markDirectories

  • Type: boolean
    • Default: false

Add a / character to directory entries.

absolute

  • Type: boolean
    • Default: false

Return absolute paths for matched entries.

:book: Note that you need to use this option if you want to use absolute negative patterns like ${__dirname}/*.md.

nobrace

  • Type: boolean
    • Default: false

Disable expansion of brace patterns ({a,b}, {1..3}).

brace

  • Type: boolean
    • Default: true

The nobrace option without double-negation. This option has a higher priority than nobrace.

noglobstar

  • Type: boolean
    • Default: false

Disable matching with globstars (**).

globstar

  • Type: boolean
    • Default: true

The noglobstar option without double-negation. This option has a higher priority than noglobstar.

noext

  • Type: boolean
    • Default: false

Disable extglob support (patterns like +(a|b)), so that extglobs are regarded as literal characters.

extension

  • Type: boolean
    • Default: true

The noext option without double-negation. This option has a higher priority than noext.

nocase

  • Type: boolean
    • Default: false

Disable a case-sensitive mode for matching files.

Examples
  • File System: test/file.md, test/File.md
  • Case-sensitive for test/file.* pattern (false): test/file.md
  • Case-insensitive for test/file.* pattern (true): test/file.md, test/File.md

case

  • Type: boolean
    • Default: true

The nocase option without double-negation. This option has a higher priority than nocase.

matchBase

  • Type: boolean
    • Default: false

Allow glob patterns without slashes to match a file path based on its basename. For example, a?b would match the path /xyz/123/acb, but not /xyz/acb/123.

transform

  • Type: Function
    • Default: null

Allows you to transform a path or fs.Stats object before it is added to the results array.

If you are using TypeScript, you probably want to specify your own type of the returned array.

How to exclude directory from reading?

You can use a negative pattern like this: !**/node_modules or !**/node_modules/**. Also you can use ignore option. Just look at the example below.

first/
├── file.md
└── second
    └── file.txt

If you don’t want to read the second directory, you must write the following pattern: !**/second or !**/second/**.

:warning: When you write !**/second/**/* it means that the directory will be read, but all the entries will not be included in the results.

You have to understand that if you write the pattern to exclude directories, then the directory will not be read under any circumstances.

How to use UNC path?

You cannot use UNC paths as patterns (due to syntax), but you can use them as cwd directory.

Compatible with node-glob?

Not fully, because fast-glob does not implement all options of node-glob. See table below.

node-glob fast-glob
cwd cwd
root
dot dot
nomount
mark markDirectories
nosort
nounique unique
nobrace nobrace or brace
noglobstar noglobstar or globstar
noext noext or extension
nocase nocase or case
matchBase matchBase
nodir onlyFiles
ignore ignore
follow followSymlinkedDirectories
realpath
absolute absolute

Benchmarks

Tech specs:

Server: Vultr Bare Metal

  • Processor: E3-1270v6 (8 CPU)
    • RAM: 32GB
    • Disk: SSD

You can see results here for latest release.

  • readdir-enhanced – Fast functional replacement for fs.readdir().
    • globby – User-friendly glob matching.
    • node-glob – «Standard» glob functionality for Node.js
    • bash-glob – Bash-powered globbing for node.js.
    • glob-stream – A Readable Stream interface over node-glob that is used in gulpjs.
    • tiny-glob – Tiny and extremely fast library to match files and folders using glob patterns.

Changelog

See the Releases section of our GitHub project for changelogs for each release version.

Linux OS X Windows Coverage Downloads
Build Status Windows Build Status Coverage Status npm module downloads per month


ignore

ignore is a manager, filter and parser implemented in pure JavaScript according to the .gitignore spec 2.22.1.

ignore is used by eslint, gitbook and many others.

Pay ATTENTION that minimatch (which is used by fstream-ignore) does not follow the gitignore spec.

To filter filenames according to a .gitignore file, I recommend this npm package, ignore.

To parse an .npmignore file, you should use minimatch, because an .npmignore file is parsed by npm using minimatch and it does not work in the .gitignore way.

Tested on

ignore is fully tested, and has more than five hundred unit tests.

  • Linux + Node: 0.8 - 7.x
  • Windows + Node: 0.10 - 7.x, node < 0.10 is not tested due to the lack of support of appveyor.

Actually, ignore does not rely on any specific version of Node.

Since 4.0.0, ignore will no longer support node < 6 by default, to use in node < 6, require('ignore/legacy'). For details, see CHANGELOG.

Table Of Main Contents

Install

Usage

Filter the given paths

As the filter function

Win32 paths will be handled

Why another ignore?

  • ignore is a standalone module, and is much simpler so that it can easily work with other programs, unlike isaacs’s fstream-ignore, which must work with the modules of the fstream family.

  • ignore only contains utility methods to filter paths according to the specified ignore rules, so
    • ignore never tries to find out ignore rules by traversing directories or fetching from git configurations.
    • ignore doesn’t care about sub-modules of git projects.
  • Exactly according to the gitignore man page, it fixes some known matching issues of fstream-ignore, such as:
    • '/*.js' should only match 'a.js', but not 'abc/a.js'.
    • '**/foo' should match 'foo' anywhere.
    • Prevent re-including a file if a parent directory of that file is excluded.
    • Handle trailing whitespace:
      • 'a ' (one space) should not match 'a  ' (two spaces).
      • 'a \ ' matches 'a '.
    • All test cases are verified with the result of git check-ignore.


Methods

.add(pattern: string | Ignore): this

.add(patterns: Array<string | Ignore>): this

  • pattern String | Ignore An ignore pattern string, or the Ignore instance
  • patterns Array<String | Ignore> Array of ignore patterns.

Adds a rule or several rules to the current manager.

Returns this

Notice that a line starting with '#'(hash) is treated as a comment. Put a backslash (\) in front of the first hash for patterns that begin with a hash, if you want to ignore a file with a hash at the beginning of the filename.

pattern could either be a line of ignore pattern or a string of multiple ignore patterns, which means we could just ignore().add() the content of an ignore file:

pattern could also be an ignore instance, so that we could easily inherit the rules of another Ignore instance.

.addIgnoreFile(path)

REMOVED in 3.x for now.

To upgrade ignore@2.x up to 3.x, use

instead.

.filter(paths: Array<Pathname>): Array<Pathname>

Filters the given array of pathnames, and returns the filtered array.

  • paths Array.<Pathname> The array of pathnames to be filtered.

Pathname Conventions:

1. Pathname should be a path.relative()d pathname

Pathname should be a string that has been path.join()ed, or the return value of path.relative() to the current directory.

In other words, each Pathname here should be a relative path to the directory of the gitignore rules.

Suppose the dir structure is:

/path/to/your/repo
    |-- a
    |   |-- a.js
    |
    |-- .b
    |
    |-- .c
         |-- .DS_store

Then the paths might be like this:

2. filenames and dirnames

node-ignore does NO fs.stat during path matching, so for the example below:

Especially for people who develop libraries based on node-ignore, it is important to understand this.

Usually, you could use glob with option.mark = true to fetch the structure of the current directory:

.ignores(pathname: Pathname): boolean

new in 3.2.0

Returns Boolean whether pathname should be ignored.

.createFilter()

Creates a filter function which could filter an array of paths with Array.prototype.filter.

Returns function(path) the filter function.

.test(pathname: Pathname) since 5.0.0

Returns TestResult

  • {ignored: true, unignored: false}: the pathname is ignored
  • {ignored: false, unignored: true}: the pathname is unignored
  • {ignored: false, unignored: false}: the pathname is never matched by any ignore rules.

options.ignorecase since 4.0.0

Similar to the core.ignorecase option of git-config, node-ignore will be case-insensitive if options.ignorecase is set to true (the default value), and case-sensitive otherwise.

static ignore.isPathValid(pathname): boolean since 5.0.0

Check whether the pathname is a valid path.relative()d path according to the convention.

This method is NOT used to check if an ignore pattern is valid.




Upgrade Guide

Upgrade 4.x -> 5.x

Since 5.0.0, if an invalid Pathname is passed into ig.ignores(), an error will be thrown, while ignore < 5.0.0 made no guarantees about the return value.

See the convention here for details.

If there are invalid pathnames, the conversion and filtration should be done by users.

Upgrade 3.x -> 4.x

Since 4.0.0, ignore will no longer support node < 6, to use ignore in node < 6:

Upgrade 2.x -> 3.x

  • All options of 2.x are unnecessary and removed, so just remove them.
  • ignore() instance is no longer an EventEmitter, and all events are unnecessary and removed.
  • .addIgnoreFile() is removed, see the .addIgnoreFile section for details.



Collaborators

  • [@whitecolor](https://github.com/whitecolor) Alex
  • [@SamyPesse](https://github.com/SamyPesse) Samy Pessé
  • [@azproduction](https://github.com/azproduction) Mikhail Davydov
  • [@TrySound](https://github.com/TrySound) Bogdan Chadkin
  • [@JanMattner](https://github.com/JanMattner) Jan Mattner
  • [@ntwb](https://github.com/ntwb) Stephen Edgar
  • [@kasperisager](https://github.com/kasperisager) Kasper Isager
  • [@sandersn](https://github.com/sandersn) Nathan Shively-Sanders


@datastructures-js/binary-search-tree

build:? npm npm npm

Binary Search Tree & AVL Tree (Self Balancing Tree) implementation in javascript.

Binary Search Tree Binary Search Tree
AVL Tree
(Self Balancing Tree)
AVL Tree


Table of Contents

install

API

Both trees have the same interface except that AVL tree will maintain itself balanced by rotating the nodes that become unbalanced during insertion and deletion. If your code requires a strictly balanced tree that always benefits from the log(n) runtime of insert & remove, you should use the AVL one.

require

import

Construction

.insert(key, value)

inserts a node with key/value into the tree. Inserting a node with an existing key updates the existing node’s value with the new one. The AVL tree will rotate nodes properly if the tree becomes unbalanced during insertion.

params
name type
key number or string
value object
return
BinarySearchTree BinarySearchTreeNode
AvlTree AvlTreeNode
runtime
O(log(n))

Example

.has(key)

checks if a node exists by its key.

params
name type
key number or string
return
boolean
runtime
O(log(n))

Example

.find(key)

finds a node in the tree by its key.

params
name type
key number or string
return
BinarySearchTree BinarySearchTreeNode
AvlTree AvlTreeNode
runtime
O(log(n))

Example

.min()

finds the node with min key in the tree.

return
BinarySearchTree BinarySearchTreeNode
AvlTree AvlTreeNode
runtime
O(log(n))

Example

.max()

finds the node with max key in the tree.

return
BinarySearchTree BinarySearchTreeNode
AvlTree AvlTreeNode
runtime
O(log(n))

Example

.root()

returns the root node of the tree.

return
BinarySearchTree BinarySearchTreeNode
AvlTree AvlTreeNode
runtime
O(1)

Example

.count()

returns the count of nodes in the tree.

return
number
runtime
O(1)

Example

.traverseInOrder(cb)

traverses the tree in order (left-node-right).

params
name type description
cb function called with each node
runtime
O(n)

Example

.traversePreOrder(cb)

traverses the tree pre order (node-left-right).

params
name type description
cb function called with each node
runtime
O(n)

Example

.traversePostOrder(cb)

traverses the tree post order (left-right-node).

params
name type description
cb function called with each node
runtime
O(n)

Example

.remove(key)

removes a node from the tree by its key. The AVL tree will rotate nodes properly if the tree becomes unbalanced during deletion.

params
name type
key number or string
return
boolean
runtime
O(log(n))

Example

.clear()

clears the tree.

runtime
O(1)

Example

BinarySearchTreeNode

.getKey()

returns the node’s key that is used to compare with other nodes.

return
number or string

.setValue(value)

change the value that is associated with a node.

params
name type
value object

.getValue()

returns the value that is associated with a node.

return
object

.getLeft()

returns node’s left child node.

return
BinarySearchTree BinarySearchTreeNode
AvlTree AvlTreeNode

.getRight()

returns node’s right child node.

return
BinarySearchTree BinarySearchTreeNode
AvlTree AvlTreeNode

.getParent()

returns node’s parent node.

return
BinarySearchTree BinarySearchTreeNode
AvlTree AvlTreeNode

AvlTreeNode

extends BinarySearchTreeNode and adds the following methods:

.getHeight()

the height of the node in the tree. The root’s height is 1.

return
number

.getLeftHeight()

the height of the left child. 0 if no left child.

return
number

.getRightHeight()

the height of the right child. 0 if no right child.

return
number

.calculateBalance()

returns the node’s balance by subtracting right height from left height.

return
number

Build

grunt build


to-regex-range NPM version NPM monthly downloads NPM total downloads Linux Build Status

Pass two numbers, get a regex-compatible source string for matching ranges. Validated against more than 2.78 million test assertions.

Install

Install with npm:

Install with yarn:

What does this do?


This library generates the source string to be passed to new RegExp() for matching a range of numbers.

Example

A string is returned so that you can do whatever you need with it before passing it to new RegExp() (like adding ^ or $ boundaries, defining flags, or combining it with another string).
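For instance, taking the source string that the examples table below lists for the range 1 to 10, and adding anchors:

```javascript
// '[1-9]|10' is the generated source for the range 1..10 (see the
// examples table); wrap it in a non-capturing group plus anchors so the
// alternation applies to the whole string.
const source = '[1-9]|10';
const re = new RegExp(`^(?:${source})$`);

console.log(re.test('7'));  // true
console.log(re.test('10')); // true
console.log(re.test('11')); // false
```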


Why use this library?


Convenience

Creating regular expressions for matching numbers gets deceptively complicated pretty fast.

For example, let’s say you need a validation regex for matching part of a user-id, postal code, social security number, tax id, etc:

  • regex for matching 1 => /1/ (easy enough)
  • regex for matching 1 through 5 => /[1-5]/ (not bad…)
  • regex for matching 1 or 5 => /(1|5)/ (still easy…)
  • regex for matching 1 through 50 => /([1-9]|[1-4][0-9]|50)/ (uh-oh…)
  • regex for matching 1 through 55 => /([1-9]|[1-4][0-9]|5[0-5])/ (no prob, I can do this…)
  • regex for matching 1 through 555 => /([1-9]|[1-9][0-9]|[1-4][0-9]{2}|5[0-4][0-9]|55[0-5])/ (maybe not…)
  • regex for matching 0001 through 5555 => /(0{3}[1-9]|0{2}[1-9][0-9]|0[1-9][0-9]{2}|[1-4][0-9]{3}|5[0-4][0-9]{2}|55[0-4][0-9]|555[0-5])/ (okay, I get the point!)

The numbers are contrived, but they’re also really basic. In the real world you might need to generate a regex on-the-fly for validation.

Learn more

If you’re interested in learning more about character classes and other regex features, I personally have always found regular-expressions.info to be pretty useful.

Heavily tested

As of April 27, 2017, this library runs 2,783,483 test assertions against generated regex-ranges to provide brute-force verification that results are indeed correct.

Tests run in ~870ms on my MacBook Pro, 2.5 GHz Intel Core i7.

Highly optimized

Generated regular expressions are highly optimized:

  • duplicate sequences and character classes are reduced using quantifiers
  • smart enough to use ? conditionals when number(s) or range(s) can be positive or negative
  • uses fragment caching to avoid processing the same exact string more than once


Usage

Add this library to your javascript application with the following line of code

The main export is a function that takes two integers: the min value and max value (formatted as strings or numbers).

Options

options.capture

Type: boolean

Default: undefined

Wrap the returned value in parentheses when there is more than one regex condition. Useful when you’re dynamically generating ranges.

options.shorthand

Type: boolean

Default: undefined

Use the regex shorthand for [0-9]:

options.relaxZeros

Type: boolean

Default: true

This option only applies to negative zero-padded ranges. By default, when a negative zero-padded range is defined, the number of leading zeros is relaxed using -0*.

Why are zeros relaxed for negative zero-padded ranges by default?

Consider the following.

Note that -001 and 100 are both three digits long.

In most zero-padding implementations, only a single leading zero is enough to indicate that zero-padding should be applied. Thus, the leading zeros would be “corrected” on the negative range in the example to -01, instead of -001, to make total length of each string no greater than the length of the largest number in the range (in other words, -001 is 4 digits, but 100 is only three digits).

If zeros were not relaxed by default, you might expect the resulting regex of the above pattern to match -001 - given that it’s defined that way in the arguments - but it wouldn’t. It would, however, match -01. This gets even more ambiguous with large ranges, like -01 to 1000000.

Thus, we relax zeros by default to provide a more predictable experience for users.

Examples

Range Result Compile time
toRegexRange('5, 5') 5 33μs
toRegexRange('5, 6') 5\|6 53μs
toRegexRange('29, 51') 29\|[34][0-9]\|5[01] 699μs
toRegexRange('31, 877') 3[1-9]\|[4-9][0-9]\|[1-7][0-9]{2}\|8[0-6][0-9]\|87[0-7] 711μs
toRegexRange('111, 555') 11[1-9]\|1[2-9][0-9]\|[2-4][0-9]{2}\|5[0-4][0-9]\|55[0-5] 62μs
toRegexRange('-10, 10') -[1-9]\|-?10\|[0-9] 74μs
toRegexRange('-100, -10') -1[0-9]\|-[2-9][0-9]\|-100 49μs
toRegexRange('-100, 100') -[1-9]\|-?[1-9][0-9]\|-?100\|[0-9] 45μs
toRegexRange('001, 100') 0{2}[1-9]\|0[1-9][0-9]\|100 158μs
toRegexRange('0010, 1000') 0{2}1[0-9]\|0{2}[2-9][0-9]\|0[1-9][0-9]{2}\|1000 61μs
toRegexRange('1, 2') 1\|2 10μs
toRegexRange('1, 5') [1-5] 24μs
toRegexRange('1, 10') [1-9]\|10 23μs
toRegexRange('1, 100') [1-9]\|[1-9][0-9]\|100 30μs
toRegexRange('1, 1000') [1-9]\|[1-9][0-9]{1,2}\|1000 52μs
toRegexRange('1, 10000') [1-9]\|[1-9][0-9]{1,3}\|10000 47μs
toRegexRange('1, 100000') [1-9]\|[1-9][0-9]{1,4}\|100000 44μs
toRegexRange('1, 1000000') [1-9]\|[1-9][0-9]{1,5}\|1000000 49μs
toRegexRange('1, 10000000') [1-9]\|[1-9][0-9]{1,6}\|10000000 63μs

Heads up!

Order of arguments

When the min is larger than the max, values will be flipped to create a valid range:

Is effectively flipped to:

Steps / increments

This library does not support steps (increments). A PR to add support would be welcome.

History

v2.0.0 - 2017-04-21

New features

Adds support for zero-padding!

v1.0.0

Optimizations

Repeating ranges are now grouped using quantifiers. Processing time is roughly the same, but the generated regex is much smaller, which should result in faster matching.

Attribution

Inspired by the python library range-regex.

About

  • expand-range: Fast, bash-like range expansion. Expand a range of numbers or letters, uppercase or lowercase. See… more | homepage
  • fill-range: Fill in a range of numbers or letters, optionally passing an increment or step to… more | homepage
  • micromatch: Glob matching for javascript/node.js. A drop-in replacement and faster alternative to minimatch and multimatch. | homepage
  • repeat-element: Create an array by repeating the given value n times. | homepage
  • repeat-string: Repeat the given string n times. Fastest implementation for repeating a string. | homepage

Contributing

Pull requests and stars are always welcome. For bugs and feature requests, please create an issue.

Building docs

(This project’s readme.md is generated by verb, please don’t edit the readme directly. Any changes to the readme must be made in the .verb.md readme template.)

To generate the readme, run the following command:

Running tests

Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command:

Author

Jon Schlinkert


This file was generated by verb-generate-readme, v0.6.0, on April 27, 2017.


yargs-parser

ci NPM version Conventional Commits nycrc config on GitHub

The mighty option parser used by yargs.

Visit the yargs website for more examples and thorough usage instructions.

Example

or parse a string!

Convert an array of mixed types before passing to yargs-parser:
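
The original examples were code blocks; as a self-contained stand-in, here is a hypothetical, greatly simplified sketch of the `--key value` / `--key=value` mapping the parser documents (the real yargs-parser does far more):

```javascript
// Toy illustration only -- not the real yargs-parser.
function toyParse(args) {
  const result = { _: [] }
  for (let i = 0; i < args.length; i++) {
    const arg = String(args[i]) // tolerate mixed types by stringifying
    if (arg.startsWith('--')) {
      const [key, value] = arg.slice(2).split('=')
      if (value !== undefined) {
        result[key] = isNaN(Number(value)) ? value : Number(value)
      } else if (i + 1 < args.length && !String(args[i + 1]).startsWith('-')) {
        const next = String(args[++i])
        result[key] = isNaN(Number(next)) ? next : Number(next)
      } else {
        result[key] = true // bare flag becomes a boolean
      }
    } else {
      result._.push(arg) // positional argument
    }
  }
  return result
}

console.log(toyParse(['--foo', '33', '--bar', 'hello']))
// { _: [], foo: 33, bar: 'hello' }
```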

Deno Example

As of v19 yargs-parser supports Deno:

ESM Example

As of v19 yargs-parser supports ESM (both in Node.js and in the browser):

Node.js:

Browsers:

API

parser(args, opts={})

Parses command line arguments returning a simple mapping of keys and values.

expects:

  • args: a string or array of strings representing the options to parse.
  • opts: provide a set of hints indicating how args should be parsed:
    • opts.alias: an object representing the set of aliases for a key: {alias: {foo: ['f']}}.
    • opts.array: indicate that keys should be parsed as an array: {array: ['foo', 'bar']}.
      Indicate that keys should be parsed as an array and coerced to booleans / numbers:
      {array: [{ key: 'foo', boolean: true }, {key: 'bar', number: true}]}.
    • opts.boolean: arguments should be parsed as booleans: {boolean: ['x', 'y']}.
    • opts.coerce: provide a custom synchronous function that returns a coerced value from the argument provided (or throws an error). For arrays the function is called only once for the entire array:
      {coerce: {foo: function (arg) {return modifiedArg}}}.
    • opts.config: indicate a key that represents a path to a configuration file (this file will be loaded and parsed).
    • opts.configObjects: configuration objects to parse, their properties will be set as arguments:
      {configObjects: [{'x': 5, 'y': 33}, {'z': 44}]}.
    • opts.configuration: provide configuration options to the yargs-parser (see: configuration).
    • opts.count: indicate a key that should be used as a counter, e.g., -vvv = {v: 3}.
    • opts.default: provide default values for keys: {default: {x: 33, y: 'hello world!'}}.
    • opts.envPrefix: environment variables (process.env) with the prefix provided should be parsed.
    • opts.narg: specify that a key requires n arguments: {narg: {x: 2}}.
    • opts.normalize: path.normalize() will be applied to values set to this key.
    • opts.number: keys should be treated as numbers.
    • opts.string: keys should be treated as strings (even if they resemble a number -x 33).
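
As one concrete illustration of these hints, opts.count turns repeated occurrences into a number (e.g., -vvv = {v: 3}). A self-contained sketch of that counting (not the parser's actual code):

```javascript
// Sketch of opts.count semantics: each occurrence of the flag increments the key.
function countFlags(args, key) {
  let n = 0
  for (const arg of args) {
    if (/^-[a-z]+$/.test(arg)) {
      // '-vvv' contributes three occurrences of 'v'
      n += arg.split('').filter((ch) => ch === key).length
    }
  }
  return n
}

console.log(countFlags(['-vvv'], 'v')) // 3
```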

returns:

  • obj: an object representing the parsed value of args
    • key/value: key value pairs for each argument and their aliases.
    • _: an array representing the positional arguments.
    • [optional] --: an array with arguments after the end-of-options flag --.

require('yargs-parser').detailed(args, opts={})

Parses a command line string, returning detailed information required by the yargs engine.

expects:

  • args: a string or array of strings representing options to parse.
  • opts: a set of hints indicating how args should be parsed; inputs are identical to require('yargs-parser')(args, opts={}).

returns:

  • argv: an object representing the parsed value of args
    • key/value: key value pairs for each argument and their aliases.
    • _: an array representing the positional arguments.
    • [optional] --: an array with arguments after the end-of-options flag --.
  • error: populated with an error object if an exception occurred during parsing.
  • aliases: the inferred list of aliases built by combining lists in opts.alias.
  • newAliases: any new aliases added via camel-case expansion:
    • boolean: { fooBar: true }
  • defaulted: any new argument created by opts.default, no aliases included.
    • boolean: { foo: true }
  • configuration: given by default settings and opts.configuration.

Configuration

The yargs-parser applies several automated transformations on the keys provided in args. These features can be turned on and off using the configuration field of opts.

short option groups

  • default: true.
  • key: short-option-groups.

Should a group of short-options be treated as boolean flags?

if disabled:

camel-case expansion

  • default: true.
  • key: camel-case-expansion.

Should hyphenated arguments be expanded into camel-case aliases?

if disabled:
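
The expansion itself is just a hyphen-to-camelCase rename of the key. A self-contained sketch of the rename (not the parser's actual code):

```javascript
// Convert a hyphenated flag name to its camel-case alias.
function camelCase(key) {
  return key.replace(/-([a-z])/g, (_, ch) => ch.toUpperCase())
}

console.log(camelCase('garbage-collection')) // 'garbageCollection'
// With camel-case-expansion enabled, `--garbage-collection true` would set
// both `garbage-collection` and `garbageCollection` on the result.
```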

dot-notation

  • default: true
  • key: dot-notation

Should keys that contain . be treated as objects?

if disabled:
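
When enabled, a dotted key is expanded into a nested object. A self-contained sketch of that expansion (illustrative only):

```javascript
// Sketch of dot-notation: a key like 'foo.bar' becomes a nested object.
function setDotted(obj, key, value) {
  const parts = key.split('.')
  let cur = obj
  for (const part of parts.slice(0, -1)) {
    cur = cur[part] = cur[part] || {}
  }
  cur[parts[parts.length - 1]] = value
  return obj
}

console.log(setDotted({}, 'foo.bar', 33)) // { foo: { bar: 33 } }
```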

parse numbers

  • default: true
  • key: parse-numbers

Should keys that look like numbers be treated as such?

if disabled:

parse positional numbers

  • default: true
  • key: parse-positional-numbers

Should positional keys that look like numbers be treated as such?

if disabled:

boolean negation

  • default: true
  • key: boolean-negation

Should variables prefixed with --no be treated as negations?

if disabled:
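
When enabled, the negation is a simple prefix strip that sets the remaining key to false. A self-contained sketch (not the parser's actual code):

```javascript
// Sketch of boolean negation: strip the `no-` prefix and set the key to false.
function negate(arg) {
  const m = /^--no-(.+)$/.exec(arg)
  return m ? { key: m[1], value: false } : null
}

console.log(negate('--no-cake')) // { key: 'cake', value: false }
```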

combine arrays

  • default: false
  • key: combine-arrays

Should arrays be combined when provided by both command line arguments and a configuration file?

duplicate arguments array

  • default: true
  • key: duplicate-arguments-array

Should arguments be coerced into an array when duplicated:

if disabled:
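
The coercion described above amounts to collecting repeated values into an array. A self-contained sketch (illustrative only):

```javascript
// Sketch: when a flag repeats, collect its values into an array.
function collect(result, key, value) {
  if (key in result) {
    result[key] = [].concat(result[key], value)
  } else {
    result[key] = value
  }
  return result
}

const argv = {}
collect(argv, 'x', 1)
collect(argv, 'x', 2)
console.log(argv) // { _: undefined? no -- just { x: [1, 2] } }
```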

flatten duplicate arrays

  • default: true
  • key: flatten-duplicate-arrays

Should array arguments be coerced into a single array when duplicated:

if disabled:

greedy arrays

  • default: true
  • key: greedy-arrays

Should arrays consume more than one positional argument following their flag?

if disabled:

Note: in v18.0.0 we are considering defaulting greedy arrays to false.

nargs eats options

  • default: false
  • key: nargs-eats-options

Should nargs consume dash options as well as positional arguments?

negation prefix

  • default: no-
  • key: negation-prefix

The prefix to use for negated boolean variables.

if set to quux:

populate --

  • default: false.
  • key: populate--

Should unparsed flags be stored in -- or _?

If disabled:

If enabled:

set placeholder key

  • default: false.
  • key: set-placeholder-key.

Should a placeholder be added for keys not set via the corresponding CLI argument?

If disabled:

If enabled:

halt at non-option

  • default: false.
  • key: halt-at-non-option.

Should parsing stop at the first positional argument? This is similar to how e.g. ssh parses its command line.

If disabled:

If enabled:

strip aliased

  • default: false
  • key: strip-aliased

Should aliases be removed before returning results?

If disabled:

If enabled:

strip dashed

  • default: false
  • key: strip-dashed

Should dashed keys be removed before returning results? This option has no effect if camel-case-expansion is disabled.

If disabled:

If enabled:

unknown options as args

  • default: false
  • key: unknown-options-as-args

Should unknown options be treated like regular arguments? An unknown option is one that is not configured in opts.

If disabled

If enabled

Libraries in this ecosystem make a best effort to track Node.js’ release schedule. Here’s a post on why we think this is important.

Special Thanks

The yargs project evolved from optimist and minimist. It owes its existence to a lot of James Halliday’s hard work. Thanks substack! Beep boop \o/

ISC



snapdragon-node NPM version NPM monthly downloads NPM total downloads Linux Build Status

Snapdragon utility for creating a new AST node in custom code, such as plugins.

Install

Install with npm:

Usage

With snapdragon v0.9.0 and higher you can use this.node() to create a new Node, whenever it makes sense.

API

Node

Create a new AST Node with the given val and type.

Params

  • val {String|Object}: Pass a matched substring, or an object to merge onto the node.
  • type {String}: The node type to use when val is a string.
  • returns {Object}: node instance

Example
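
The original example was a code block; as a self-contained stand-in, here is a minimal hypothetical class mirroring the documented constructor shape (not snapdragon-node itself):

```javascript
// Minimal stand-in for snapdragon-node's Node, for illustration only.
class Node {
  constructor(val, type) {
    if (val && typeof val === 'object') {
      Object.assign(this, val) // merge an object onto the node
    } else {
      this.val = val           // matched substring
      this.type = type         // node type
    }
  }
}

const node = new Node('*', 'star')
console.log(node.type) // 'star'
console.log(node.val)  // '*'
```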

.isNode

Returns true if the given value is a node.

Params

  • node {Object}
  • returns {Boolean}

Example

.define

Define a non-enumerable property on the node instance. Useful for adding properties that shouldn’t be extended or visible during debugging.

Params

  • name {String}
  • val {any}
  • returns {Object}: returns the node instance

Example

.isEmpty

Returns true if node.val is an empty string, or node.nodes does not contain any non-empty text nodes.

Params

  • fn {Function}: (optional) Filter function that is called on node and/or child nodes. isEmpty will return false immediately when the filter function returns false on any nodes.
  • returns {Boolean}

Example

.push

Given node foo and node bar, push node bar onto foo.nodes, and set foo as bar.parent.

Params

  • node {Object}
  • returns {Number}: Returns the length of node.nodes

Example
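
The push/parent bookkeeping described above can be sketched with plain objects (illustrative only, not the library's code):

```javascript
// Sketch of .push semantics: append the child and set its parent.
function push(parent, child) {
  child.parent = parent
  return parent.nodes.push(child) // returns the new length of parent.nodes
}

const foo = { type: 'foo', nodes: [] }
const bar = { type: 'bar' }
console.log(push(foo, bar))       // 1
console.log(bar.parent === foo)   // true
```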

.unshift

Given node foo and node bar, unshift node bar onto foo.nodes, and set foo as bar.parent.

Params

  • node {Object}
  • returns {Number}: Returns the length of node.nodes

Example

.pop

Pop a node from node.nodes.

  • returns {Number}: Returns the popped node

Example

.shift

Shift a node from node.nodes.

  • returns {Object}: Returns the shifted node

Example

.remove

Remove node from node.nodes.

Params

  • node {Object}
  • returns {Object}: Returns the removed node.

Example

.find

Get the first child node from node.nodes that matches the given type. If type is a number, the child node at that index is returned.

Params

  • type {String}
  • returns {Object}: Returns a child node or undefined.

Example

.isType

Return true if the node is the given type.

Params

  • type {String}
  • returns {Boolean}

Example

.hasType

Return true if the node.nodes has the given type.

Params

  • type {String}
  • returns {Boolean}

Example

  • returns {Array}

Example

  • returns {Number}

Example

  • returns {Object}

Example

  • returns {Object}

Example

  • returns {Object}: The first node, or undefined

Example

  • returns {Object}: The last node, or undefined

Example


Release history

Changelog entries are classified using the following labels from keep-a-changelog:

  • added: for new features
  • changed: for changes in existing functionality
  • deprecated: for once-stable features removed in upcoming releases
  • removed: for deprecated features removed in this release
  • fixed: for any bug fixes

Custom labels used in this changelog:

  • dependencies: bumps dependencies
  • housekeeping: code re-organization, minor edits, or other changes that don’t fit in one of the other categories.

[2.0.0] - 2017-05-01

Changed

  • .unshiftNode was renamed to .unshift
  • .pushNode was renamed to .push
  • .getNode was renamed to .find

Added

[0.1.0]

First release.

About

Contributing

Pull requests and stars are always welcome. For bugs and feature requests, please create an issue.

Please read the contributing guide for advice on opening issues, pull requests, and coding standards.

Building docs

(This project’s readme.md is generated by verb, please don’t edit the readme directly. Any changes to the readme must be made in the .verb.md readme template.)

To generate the readme, run the following command:

Running tests

Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command:

Author

Jon Schlinkert


This file was generated by verb-generate-readme, v0.6.0, on June 25, 2017.

TypeScript ESTree

A parser that converts TypeScript source code into an ESTree-compatible form

CI NPM Version NPM Downloads

Getting Started

You can find our Getting Started docs here

About

This parser is somewhat generic and robust, and could be used to power any use-case which requires taking TypeScript source code and producing an ESTree-compatible AST.

In fact, it is already used within these hyper-popular open-source projects to power their TypeScript support:

  • ESLint, the pluggable linting utility for JavaScript and JSX
  • Prettier, an opinionated code formatter

Installation

API

Parsing

parse(code, options)

Parses the given string of code with the options provided and returns an ESTree-compatible AST.

interface ParseOptions {
  /**
   * create a top-level comments array containing all comments
   */
  comment?: boolean;

  /**
   * An array of modules to turn explicit debugging on for.
   * - 'typescript-eslint' is the same as setting the env var `DEBUG=typescript-eslint:*`
   * - 'eslint' is the same as setting the env var `DEBUG=eslint:*`
   * - 'typescript' is the same as setting `extendedDiagnostics: true` in your tsconfig compilerOptions
   *
   * For convenience, also supports a boolean:
   * - true === ['typescript-eslint']
   * - false === []
   */
  debugLevel?: boolean | ('typescript-eslint' | 'eslint' | 'typescript')[];

  /**
   * Cause the parser to error if it encounters an unknown AST node type (useful for testing).
   * This case only usually occurs when TypeScript releases new features.
   */
  errorOnUnknownASTType?: boolean;

  /**
   * Absolute (or relative to `cwd`) path to the file being parsed.
   */
  filePath?: string;

  /**
   * Enable parsing of JSX.
   * For more details, see https://www.typescriptlang.org/docs/handbook/jsx.html
   *
   * NOTE: this setting does not affect known file types (.js, .jsx, .ts, .tsx, .json) because the
   * TypeScript compiler has its own internal handling for known file extensions.
   *
   * For the exact behavior, see https://github.com/typescript-eslint/typescript-eslint/tree/master/packages/parser#parseroptionsecmafeaturesjsx
   */
  jsx?: boolean;

  /**
   * Controls whether the `loc` information is included on each node.
   * The `loc` property is an object which contains the exact line/column the node starts/ends on.
   * This is similar to the `range` property, except it is line/column relative.
   */
  loc?: boolean;

  /*
   * Allows overriding of function used for logging.
   * When value is `false`, no logging will occur.
   * When value is not provided, `console.log()` will be used.
   */
  loggerFn?: Function | false;

  /**
   * Controls whether the `range` property is included on AST nodes.
   * The `range` property is a [number, number] which indicates the start/end index of the node in the file contents.
   * This is similar to the `loc` property, except this is the absolute index.
   */
  range?: boolean;

  /**
   * Set to true to create a top-level array containing all tokens from the file.
   */
  tokens?: boolean;

  /*
   * The JSX AST changed the node type for string literals
   * inside a JSX Element from `Literal` to `JSXText`.
   * When value is `true`, these nodes will be parsed as type `JSXText`.
   * When value is `false`, these nodes will be parsed as type `Literal`.
   */
  useJSXTextNode?: boolean;
}

const PARSE_DEFAULT_OPTIONS: ParseOptions = {
  comment: false,
  errorOnUnknownASTType: false,
  filePath: 'estree.ts', // or 'estree.tsx', if you pass jsx: true
  jsx: false,
  loc: false,
  loggerFn: undefined,
  range: false,
  tokens: false,
  useJSXTextNode: false,
};

declare function parse(
  code: string,
  options: ParseOptions = PARSE_DEFAULT_OPTIONS,
): TSESTree.Program;

Example usage:

parseAndGenerateServices(code, options)

Parses the given string of code with the options provided and returns an ESTree-compatible AST. Accepts additional options which can be used to generate type information along with the AST.

interface ParseAndGenerateServicesOptions extends ParseOptions {
  /**
   * Causes the parser to error if the TypeScript compiler returns any unexpected syntax/semantic errors.
   */
  errorOnTypeScriptSyntacticAndSemanticIssues?: boolean;

  /**
   * ***EXPERIMENTAL FLAG*** - Use this at your own risk.
   *
   * Causes TS to use the source files for referenced projects instead of the compiled .d.ts files.
   * This feature is not yet optimized, and is likely to cause OOMs for medium to large projects.
   *
   * This flag REQUIRES at least TS v3.9, otherwise it does nothing.
   *
   * See: https://github.com/typescript-eslint/typescript-eslint/issues/2094
   */
  EXPERIMENTAL_useSourceOfProjectReferenceRedirect?: boolean;

  /**
   * When `project` is provided, this controls the non-standard file extensions which will be parsed.
   * It accepts an array of file extensions, each preceded by a `.`.
   */
  extraFileExtensions?: string[];

  /**
   * Absolute (or relative to `tsconfigRootDir`) path to the file being parsed.
   * When `project` is provided, this is required, as it is used to fetch the file from the TypeScript compiler's cache.
   */
  filePath?: string;

  /**
   * Allows the user to control whether or not two-way AST node maps are preserved
   * during the AST conversion process.
   *
   * By default: the AST node maps are NOT preserved, unless `project` has been specified,
   * in which case the maps are made available on the returned `parserServices`.
   *
   * NOTE: If `preserveNodeMaps` is explicitly set by the user, it will be respected,
   * regardless of whether or not `project` is in use.
   */
  preserveNodeMaps?: boolean;

  /**
   * Absolute (or relative to `tsconfigRootDir`) paths to the tsconfig(s).
   * If this is provided, type information will be returned.
   */
  project?: string | string[];

  /**
   * If you provide a glob (or globs) to the project option, you can use this option to ignore certain folders from
   * being matched by the globs.
   * This accepts an array of globs to ignore.
   *
   * By default, this is set to ["/node_modules/"]
   */
  projectFolderIgnoreList?: string[];

  /**
   * The absolute path to the root directory for all provided `project`s.
   */
  tsconfigRootDir?: string;

  /**
   ***************************************************************************************
   * IT IS RECOMMENDED THAT YOU DO NOT USE THIS OPTION, AS IT CAUSES PERFORMANCE ISSUES. *
   ***************************************************************************************
   *
   * When passed with `project`, this allows the parser to create a catch-all, default program.
   * This means that if the parser encounters a file not included in any of the provided `project`s,
   * it will not error, but will instead parse the file and its dependencies in a new program.
   */
  createDefaultProgram?: boolean;
}

interface ParserServices {
  program: ts.Program;
  esTreeNodeToTSNodeMap: WeakMap<TSESTree.Node, ts.Node | ts.Token>;
  tsNodeToESTreeNodeMap: WeakMap<ts.Node | ts.Token, TSESTree.Node>;
  hasFullTypeInformation: boolean;
}

interface ParseAndGenerateServicesResult<T extends TSESTreeOptions> {
  ast: TSESTree.Program;
  services: ParserServices;
}

const PARSE_AND_GENERATE_SERVICES_DEFAULT_OPTIONS: ParseOptions = {
  ...PARSE_DEFAULT_OPTIONS,
  errorOnTypeScriptSyntacticAndSemanticIssues: false,
  extraFileExtensions: [],
  preserveNodeMaps: false, // or true, if you do not set this, but pass `project`
  project: undefined,
  projectFolderIgnoreList: ['/node_modules/'],
  tsconfigRootDir: process.cwd(),
};

declare function parseAndGenerateServices(
  code: string,
  options: ParseOptions = PARSE_DEFAULT_OPTIONS,
): ParseAndGenerateServicesResult;

Example usage:

parseWithNodeMaps(code, options)

Parses the given string of code with the options provided and returns both the ESTree-compatible AST as well as the node maps. This allows you to work with both ASTs without the overhead of types that may come with parseAndGenerateServices.

Example usage:

TSESTree, AST_NODE_TYPES and AST_TOKEN_TYPES

Types for the AST produced by the parse functions.

  • TSESTree is a namespace which contains object types representing all of the AST Nodes produced by the parser.
  • AST_NODE_TYPES is an enum which provides the values for every single AST node’s type property.
  • AST_TOKEN_TYPES is an enum which provides the values for every single AST token’s type property.

If you use a non-supported version of TypeScript, the parser will log a warning to the console.

Please ensure that you are using a supported version before submitting any issues/bug reports.

Reporting Issues

Please check the current list of open and known issues and ensure the issue has not been reported before. When creating a new issue provide as much information about your environment as possible. This includes:

  • TypeScript version
  • The typescript-estree version

AST Alignment Tests

A couple of years after work on this parser began, the TypeScript Team at Microsoft began officially supporting TypeScript parsing via Babel.

I work closely with the TypeScript Team and we are gradually aligning the AST of this project with the one produced by Babel’s parser. To that end, I have created a full test harness to compare the ASTs of the two projects which runs on every PR, please see the code for more details.

Debugging

If you encounter a bug with the parser that you want to investigate, you can turn on the debug logging via setting the environment variable: DEBUG=typescript-eslint:*. I.e. in this repo you can run: DEBUG=typescript-eslint:* yarn lint.

Contributing

See the contributing guide here



@datastructures-js/graph



Table of Contents

install

API

require

import

create a graph

creates an empty graph

Example

.addVertex(key, value)

adds a vertex to the graph.

params
name type
key number or string
value object
return
Vertex
runtime
O(1)

Example

.hasVertex(key)

checks if the graph has a vertex by its key.
params
name type
key number or string
return
boolean
runtime
O(1)

Example

.verticesCount()

gets the number of vertices in the graph.

return
number
runtime
O(1)

Example

.addEdge(srcKey, destKey, weight)

adds an edge with a weight between two existing vertices. The default weight is 1 if not defined. In a directed graph, the edge is a one-way direction from source to destination; in an undirected graph, it is a two-way connection.

params
name type description
srcKey number or string the source vertex key
destKey number or string the destination vertex key
weight number the weight of the edge
runtime
O(1)

Example
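
To illustrate the documented semantics of addEdge, hasEdge, and getWeight, here is a hand-rolled adjacency-map sketch (a toy, not the library's implementation; it assumes both vertices already exist):

```javascript
// Toy adjacency-map illustration of the undirected-graph edge semantics.
class ToyGraph {
  constructor() {
    this.vertices = new Map()
    this.edges = new Map() // key -> Map(destKey -> weight)
  }
  addVertex(key, value) {
    this.vertices.set(key, value)
    if (!this.edges.has(key)) this.edges.set(key, new Map())
  }
  addEdge(srcKey, destKey, weight = 1) {
    // two-way edge, as in an undirected graph
    this.edges.get(srcKey).set(destKey, weight)
    this.edges.get(destKey).set(srcKey, weight)
  }
  hasEdge(srcKey, destKey) {
    return this.edges.has(srcKey) && this.edges.get(srcKey).has(destKey)
  }
  getWeight(srcKey, destKey) {
    if (srcKey === destKey) return 0
    const w = this.edges.has(srcKey) ? this.edges.get(srcKey).get(destKey) : undefined
    return w === undefined ? null : w
  }
}

const g = new ToyGraph()
g.addVertex('a', 1)
g.addVertex('b', 2)
g.addEdge('a', 'b', 5)
console.log(g.hasEdge('b', 'a'))   // true
console.log(g.getWeight('a', 'b')) // 5
```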

.hasEdge(srcKey, destKey)

checks if the graph has an edge between two existing vertices. In a directed graph, it returns true only if there is an edge directed from source to destination.

params
name type description
srcKey number or string the source vertex key
destKey number or string the destination vertex key
return
boolean
runtime
O(1)

Example

.edgesCount()

gets the number of edges in the graph.

return
number
runtime
O(1)

Example

.getWeight(srcKey, destKey)

gets the edge’s weight between two vertices in the graph. If there is no direct edge between the two vertices, it returns null. It also returns 0 if the source key is equal to destination key.

params
name type description
srcKey number or string the source vertex key
destKey number or string the destination vertex key
return
number
runtime
O(1)

Example

.removeVertex(key)

removes a vertex with all its edges from the graph by its key.

params
name type description
key number or string the vertex key
return
boolean
runtime
Graph O(K) : K = number of connected edges to the vertex
Directed Graph O(E) : E = number of edges in the graph

Example

.removeEdge(srcKey, destKey)

removes an edge between two existing vertices

params
name type description
srcKey number or string the source vertex key
destKey number or string the destination vertex key
return
boolean
runtime
O(1)

Example

.removeEdges(key)

removes all connected edges to a vertex by its key.

params
name type description
key number or string the vertex key
return description
number number of removed edges
runtime
Graph O(K) : K = number of connected edges to the vertex
Directed Graph O(E) : E = number of edges in the graph

Example

.traverseDfs(srcKey, cb)

traverses the graph using the depth-first recursive search.

params
name type description
srcKey number or string the starting vertex key
cb function the callback that is called with each vertex
runtime
O(V) : V = the number of vertices in the graph

Example

.traverseBfs(srcKey, cb)

traverses the graph using the breadth-first search with a queue.

params
name type description
srcKey number or string the starting vertex key
cb function the callback that is called with each vertex
runtime
O(V) : V = the number of vertices in the graph

Example
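
The queue-based breadth-first traversal described above can be sketched in a self-contained way (illustrative only, over a plain adjacency map rather than the library's graph):

```javascript
// Toy BFS illustration: visit vertices level by level using a queue.
function traverseBfs(adj, srcKey, cb) {
  const visited = new Set([srcKey])
  const queue = [srcKey]
  while (queue.length > 0) {
    const key = queue.shift()
    cb(key)
    for (const next of adj.get(key) || []) {
      if (!visited.has(next)) {
        visited.add(next)
        queue.push(next)
      }
    }
  }
}

const adj = new Map([['a', ['b', 'c']], ['b', ['d']], ['c', []], ['d', []]])
const order = []
traverseBfs(adj, 'a', (k) => order.push(k))
console.log(order) // ['a', 'b', 'c', 'd']
```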

.clear()

clears all vertices and edges in the graph.

runtime
O(1)

Example

Vertex

.getKey()

returns the vertex key.

return
string or number

.getValue()

returns the vertex associated value.

return
object

Build

grunt build


extglob NPM version NPM monthly downloads NPM total downloads Linux Build Status Windows Build Status

Extended glob support for JavaScript. Adds (almost) the expressive power of regular expressions to glob patterns.

Please consider following this project’s author, Jon Schlinkert, and consider starring the project to show your :heart: and support.

Install

Install with npm:

  • Convert an extglob string to a regex-compatible string.
  • More complete (and correct) support than minimatch (minimatch fails a large percentage of the extglob tests)
  • Handles negation patterns
  • Handles nested patterns
  • Organized code base, easy to maintain and make changes when edge cases arise
  • As the benchmarks show, extglob doesn’t trade speed for its completeness, accuracy, and quality.

Heads up!: This library only supports extglobs, to handle full glob patterns and other extended globbing features use micromatch instead.

Usage

The main export is a function that takes a string and options, and returns an object with the parsed AST and the compiled .output, which is a regex-compatible string that can be used for matching.

Extglob cheatsheet

pattern regex equivalent description
?(pattern-list) (...|...)? Matches zero or one occurrence of the given pattern(s)
*(pattern-list) (...|...)* Matches zero or more occurrences of the given pattern(s)
+(pattern-list) (...|...)+ Matches one or more occurrences of the given pattern(s)
@(pattern-list) (...|...) 1 Matches one of the given pattern(s)
!(pattern-list) N/A Matches anything except one of the given pattern(s)
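
The cheatsheet translations can be sketched with a toy converter that handles just these five forms for simple, non-nested patterns (the real library's parser and compiler do far more):

```javascript
// Toy converter for the five extglob forms above (non-nested patterns only).
function toyExtglobToRegexSource(pattern) {
  return pattern.replace(/([?*+@!])\(([^)]*)\)/g, (_, op, list) => {
    const group = '(' + list + ')' // the pattern-list already uses `|`
    switch (op) {
      case '?': return group + '?'
      case '*': return group + '*'
      case '+': return group + '+'
      case '@': return group
      case '!': return '(?!' + group + ').*' // rough approximation of negation
    }
  })
}

console.log(toyExtglobToRegexSource('a@(b|c)d')) // 'a(b|c)d'
console.log(new RegExp('^' + toyExtglobToRegexSource('+(x|y)') + '$').test('xyx')) // true
```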

API

extglob

Convert the given extglob pattern into a regex-compatible string. Returns an object with the compiled result and the parsed AST.

Params

  • pattern {String}
  • options {Object}
  • returns {String}

Example

.match

Takes an array of strings and an extglob pattern and returns a new array that contains only the strings that match the pattern.

Params

  • list {Array}: Array of strings to match
  • pattern {String}: Extglob pattern
  • options {Object}
  • returns {Array}: Returns an array of matches

Example

.isMatch

Returns true if the specified string matches the given extglob pattern.

Params

  • string {String}: String to match
  • pattern {String}: Extglob pattern
  • options {String}
  • returns {Boolean}

Example

.contains

Returns true if the given string contains the given pattern. Similar to .isMatch but the pattern can match any part of the string.

Params

  • str {String}: The string to match.
  • pattern {String}: Glob pattern to use for matching.
  • options {Object}
  • returns {Boolean}: Returns true if the pattern matches any part of str.

Example

.matcher

Takes an extglob pattern and returns a matcher function. The returned function takes the string to match as its only argument.

Params

  • pattern {String}: Extglob pattern
  • options {String}
  • returns {Boolean}

Example

.create

Convert the given extglob pattern into a regex-compatible string. Returns an object with the compiled result and the parsed AST.

Params

  • str {String}
  • options {Object}
  • returns {String}

Example

.capture

Returns an array of matches captured by pattern in string, or null if the pattern did not match.

Params

  • pattern {String}: Glob pattern to use for matching.
  • string {String}: String to match
  • options {Object}: See available options for changing how matches are performed
  • returns {Boolean}: Returns an array of captures if the string matches the glob pattern, otherwise null.

Example

.makeRe

Create a regular expression from the given pattern and options.

Params

  • pattern {String}: The pattern to convert to regex.
  • options {Object}
  • returns {RegExp}

Example

Options

Available options are based on the options from Bash (and the option names used in bash).

options.nullglob

Type: boolean

Default: undefined

When enabled, the pattern itself will be returned when no matches are found.

options.nonull

Alias for options.nullglob, included for parity with minimatch.

options.cache

Type: boolean

Default: undefined

Functions are memoized based on the given glob patterns and options. Disable memoization by setting options.cache to false.

options.failglob

Type: boolean

Default: undefined

Throw an error if no matches are found.

Benchmarks

Last run on December 21, 2017

Differences from Bash

This library has complete parity with Bash 4.3 with only a couple of minor differences.

  • In some cases Bash returns true if the given string “contains” the pattern, whereas this library returns true if the string is an exact match for the pattern. You can relax this by setting options.contains to true.
  • This library is more accurate than Bash and thus does not fail some of the tests that Bash 4.3 still lists as failing in their unit tests

About

Contributing

Pull requests and stars are always welcome. For bugs and feature requests, please create an issue.

Running Tests

Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command:

Building docs

(This project’s readme.md is generated by verb, please don’t edit the readme directly. Any changes to the readme must be made in the .verb.md readme template.)

To generate the readme, run the following command:

You might also be interested in these projects:

  • braces: Bash-like brace expansion, implemented in JavaScript. Safer than other brace expansion libs, with complete support… more | homepage
  • expand-brackets: Expand POSIX bracket expressions (character classes) in glob patterns. | homepage
  • expand-range: Fast, bash-like range expansion. Expand a range of numbers or letters, uppercase or lowercase. Used… more | homepage
  • fill-range: Fill in a range of numbers or letters, optionally passing an increment or step to… more | homepage
  • micromatch: Glob matching for javascript/node.js. A drop-in replacement and faster alternative to minimatch and multimatch. | homepage
Commits Contributor
49 jonschlinkert
2 isiahmeadows
1 doowb
1 devongovett
1 mjbvz
1 shinnn

Author

Jon Schlinkert


This file was generated by verb-generate-readme, v0.6.0, on December 21, 2017.


  1. @ isn’t a RegEx character.



@dojo/widgets

Build Status codecov npm version

A suite of pre-built Dojo widgets, ready to use in your web application. These widgets are built using Dojo’s widget authoring system [(@dojo/framework/core)](https://github.com/dojo/framework/blob/master/src/core/README.md).

Usage

To use @dojo/widgets in your project, you will need to install the package:

This package contains all of the widgets in this repo.

All of the widgets are on the same release schedule; that is to say, we release all widgets at the same time. Minor releases may include new widgets and/or features, whereas patch releases may contain fixes to more than one widget.

To use a widget in your application, you will need to import each widget individually:

Each widget module has a default export of the widget itself, as well as named exports for things such as properties specific to the widget:

Because each widget is a separate module, when you create a release build of your application, you will only include the widgets that you have explicitly imported. This allows our dojo cli build tooling to make sure that the production build of your application only includes the widgets you use and is as small as possible.

Features

  • All widgets are supported in all evergreen browsers (Chrome, Edge, Firefox, IE11+, and Safari) as well as popular mobile browsers (Mobile Safari, Chrome on Android).

  • All widgets are designed to be accessible. If custom ARIA semantics are required, widgets have an aria property that may be passed an object with custom aria-* attributes.

  • All widgets are fully themeable. Example themes are available in the [@dojo/themes](https://github.com/dojo/themes) repository.

  • All widgets support internationalization (i18n)

Widgets

Live examples of current widgets are available in the widget showcase.

Form widgets

Button

Calendar

Checkbox/Toggle

ComboBox

Label

Listbox

Radio

RangeSlider

Select

NativeSelect

Slider

TextArea

TextInput

TimePicker

Layout widgets

Accordion

SlidePane

SplitPane

TabController

TitlePane

Misc widgets

Grid

Dialog

GlobalEvent

Icon

Progress

Toolbar

Tooltip

Conventions

EventHandlers

You can register event handlers that get called when the corresponding events occur by passing the handlers into a widget’s properties. The naming convention for event handlers is as follows:

  • if the parent of the widget has the power to decide if an event is successful, i.e. can cancel the event, then the child widget will call an event handler in the following format:

onRequest[X], e.g. for a close event, the event handler called by the child widget must be called onRequestClose

Here the child widget is requesting that the close event take place.

  • for events that will occur regardless of child/parent interaction, then the Request naming convention is dropped:

on[X], e.g. for a dismiss event, the event handler called by the child widget must be called onDismiss
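As a plain-JavaScript sketch of the convention (no Dojo APIs are used here; the factory and property names are hypothetical):

```javascript
// Hypothetical sketch of the onRequest[X] convention using plain functions.
function createSlidePane(properties) {
  return {
    // The child cannot close itself – it asks its parent via onRequestClose.
    dismissButtonClicked() {
      if (properties.onRequestClose) {
        properties.onRequestClose();
      }
    }
  };
}

let open = true;
const pane = createSlidePane({
  // The parent decides whether the close actually happens.
  onRequestClose() { open = false; }
});
pane.dismissButtonClicked();
console.log(open); // false – the parent honoured the request
```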

Icons

We use font awesome for icons. Where a theme requires specific icons that are not part of the Font Awesome set, then those themes will ship their own icons.

Icon fonts are generated using IcoMoon. If a new icon is required, it is possible to upload the current dojoSelect.json from src/theme/fonts and then add new icons by selecting from the Font Awesome library. After selecting the new icons from the library, merge them down into the current icon set, then delete the rest of the Font Awesome icons that were added by IcoMoon. After this you can export and download them as a zip. Once downloaded you will also need to unzip them and replace the font files (svg, woff, ttf) in src/theme/fonts. Now download the new selection JSON file from the projects page of IcoMoon and replace the current dojoSelection.json file.

To make use of the new icons it is necessary to update the icon.m.css file in the theme folder with the new unicode icon like so:

.newIcon:before {
    content: "\f123";
}

Where \f123 is the unicode character for the new icon. To check the new icon works you can render it in the src/widgets/examples/icon/Basic.tsx to make sure everything renders correctly.

There is an icon widget that aids in creating proper semantics and provides type-checking for the type of icon.

Coding conventions

px vs. em - we specify font sizes in px. When creating a widget, spacing (margin, padding) should be specified using px unless the design calls for proportional spacing, in which case em can be used.

Z-index layering

Widgets adhere to a basic convention for using specific ranges of z-index values based on function and visual context. This convention is followed in both individual widget CSS and in the Dojo theme styles. These values can be overridden in a custom theme if necessary since no z-index values are set in fixed styles.

The range definitions are as follows:

  • 0 - 100: Any specific component layering, e.g. a caption over an image.
  • 100 - 200: Fixed position elements. Fixed headers and footers are clear examples of fixed page elements, but it could also include a drag-and-drop element in a drag state.
  • 200 - 300: Partial-page overlays such as Slide panes.
  • 300 - 400: Full-page overlays such as Dialogs.
  • 400 - 500: Body level popups, tooltips and alerts.

How to customize a widget

There are many ways in which you can customize the behavior and appearance of Dojo widgets. See the core README for examples of how to customize the theme or a specific CSS class of a widget.

Or you can write your own widget that extends an official widget.

Extending widgets

Because all Dojo widgets are Classes, you can simply extend the Class to add or change its behavior.

Dojo widgets provide standard extension points to allow you to customize their behavior. For more details, please refer to the widget authoring system.

Individual widgets also provide certain types of extension points where applicable: - render*: Large render functions are split up into multiple smaller pieces that can be more easily overridden to create custom vdom. - getModifierClasses: Modify the array of conditionally applied classes like css.selected or css.disabled. Not all widgets include these extension points, and some have additional overridable methods.

Widget Variants

When writing a widget variant, e.g. RaisedButton, you should ensure that you use theme.compose from the widget theme middleware. This allows your variant to inherit css from its base widget while still allowing it to be themed separately.

How do I contribute?

We appreciate your interest! Please see the Dojo Meta Repository for the Contributing Guidelines and Style Guide.

Note that all changes to widgets should work with the dojo theme. To test this start the example page (instructions at Installation section) and select the dojo option at the top of the page.

Installation

To start working with this package, clone the repository and run npm install.

In order to build the project run npm run build.

Testing

Test cases MUST be written using Intern using the Object test interface and Assert assertion interface.

90% branch coverage MUST be provided for all code submitted to this repository, as reported by istanbul’s combined coverage results for all supported platforms.

To test locally in node run:

npm run test

Widget Examples

The Dojo widget examples application is located in src/examples.

To add a new example, create a directory that matches the directory name of the widget, e.g. src/examples/src/widgets/text-input. Each widget must have an example called Basic.tsx and an entry in the src/examples/src/config.ts keyed by the name of the widget directory. The widget example should import widgets from @dojo/widgets and not via a relative import. It is very important that the config entry name (i.e. text-input) matches the folder name / css file name of the widget, otherwise the doc build will fail.

  • filename: The name of the widget module, defaults to index
  • overview: The configuration for the basic example including the imported Basic module and the example filename (has to be 'Basic')
  • examples: Additional examples for the widget, an array of configuration that specifies the title, description, module and example filename.

To view the examples locally, run npm run dev in the root directory and navigate to http://localhost:9999. This starts the examples in watch mode and will update as widget modules are changed. Note that you should not install dependencies in the src/examples project, as doing so will result in an error.

Widget Documentation

The widget examples and documentation are automatically generated by the examples application when built with the docs feature flag set to true. The site relies on a few conventions in order to be able to do this:

  1. A widget's properties interface must be named after the widget with a Properties suffix, e.g. for text-input the properties interface would be TextInputProperties
  2. The widget properties must be exported to ensure they are visible in the generated widget documentation.
  3. All themeable styles must be added to the corresponding theme css module in src/theme and match the name of the widget directory e.g. text-input.m.css
  4. Property descriptions must be included as inline docs above each property, e.g.
  5. All widgets must have a README.md file in their root directory.

To build the documentation, run npm run build:docs; to build and serve the documentation in watch mode, run npm run build:docs:dev.

Running the examples on Codesandbox

The examples also run on Codesandbox. To run the examples on the master branch, go to https://codesandbox.io/s/github/dojo/widgets/tree/master/src/examples. To run the examples for a specific user/branch/tag, adjust the URL as required.

Licensing information



TweetNaCl.js

Port of TweetNaCl / NaCl to JavaScript for modern browsers and Node.js. Public domain.

Build Status

Demo: https://tweetnacl.js.org

:warning: The library is stable and API is frozen, however it has not been independently reviewed. If you can help reviewing it, please contact me.



Documentation

Overview

The primary goal of this project is to produce a translation of TweetNaCl to JavaScript which is as close as possible to the original C implementation, plus a thin layer of idiomatic high-level API on top of it.

There are two versions, you can use either of them:

  • nacl.js is the port of TweetNaCl with minimum differences from the original + high-level API.

  • nacl-fast.js is like nacl.js, but with some functions replaced with faster versions.

Installation

You can install TweetNaCl.js via a package manager:

Bower:

$ bower install tweetnacl

NPM:

npm install tweetnacl

or download source code.

Usage

All API functions accept and return bytes as Uint8Arrays. If you need to encode or decode strings, use functions from https://github.com/dchest/tweetnacl-util-js or one of the more robust codec packages.

In Node.js v4 and later Buffer objects are backed by Uint8Arrays, so you can freely pass them to TweetNaCl.js functions as arguments. The returned objects are still Uint8Arrays, so if you need Buffers, you’ll have to convert them manually; make sure to convert using copying: new Buffer(array), instead of sharing: new Buffer(array.buffer), because some functions return subarrays of their buffers.
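For example, using the modern Buffer.from API (which behaves like the constructors mentioned above), the difference between copying and sharing looks like this:

```javascript
// Copying vs. sharing when converting a Uint8Array to a Buffer.
const arr = new Uint8Array([1, 2, 3, 4]);

// Copy: the Buffer gets its own memory (like new Buffer(array)).
const copied = Buffer.from(arr);

// Share: the Buffer wraps the same underlying ArrayBuffer
// (like new Buffer(array.buffer)).
const shared = Buffer.from(arr.buffer);

arr[0] = 99;
console.log(copied[0]); // 1  – unaffected by the mutation
console.log(shared[0]); // 99 – sees the mutation
```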

Public-key authenticated encryption (box)

Implements curve25519-xsalsa20-poly1305.

nacl.box.keyPair()

Generates a new random key pair for box and returns it as an object with publicKey and secretKey members:

{
   publicKey: ...,  // Uint8Array with 32-byte public key
   secretKey: ...   // Uint8Array with 32-byte secret key
}

nacl.box.keyPair.fromSecretKey(secretKey)

Returns a key pair for box with public key corresponding to the given secret key.

nacl.box(message, nonce, theirPublicKey, mySecretKey)

Encrypts and authenticates message using peer’s public key, our secret key, and the given nonce, which must be unique for each distinct message for a key pair.

Returns an encrypted and authenticated message, which is nacl.box.overheadLength longer than the original message.

nacl.box.open(box, nonce, theirPublicKey, mySecretKey)

Authenticates and decrypts the given box with peer’s public key, our secret key, and the given nonce.

Returns the original message, or false if authentication fails.

nacl.box.before(theirPublicKey, mySecretKey)

Returns a precomputed shared key which can be used in nacl.box.after and nacl.box.open.after.

nacl.box.after(message, nonce, sharedKey)

Same as nacl.box, but uses a shared key precomputed with nacl.box.before.

nacl.box.open.after(box, nonce, sharedKey)

Same as nacl.box.open, but uses a shared key precomputed with nacl.box.before.

nacl.box.publicKeyLength = 32

Length of public key in bytes.

nacl.box.secretKeyLength = 32

Length of secret key in bytes.

nacl.box.sharedKeyLength = 32

Length of precomputed shared key in bytes.

nacl.box.nonceLength = 24

Length of nonce in bytes.

nacl.box.overheadLength = 16

Length of overhead added to box compared to original message.

Secret-key authenticated encryption (secretbox)

Implements xsalsa20-poly1305.

nacl.secretbox(message, nonce, key)

Encrypts and authenticates message using the key and the nonce. The nonce must be unique for each distinct message for this key.

Returns an encrypted and authenticated message, which is nacl.secretbox.overheadLength longer than the original message.

nacl.secretbox.open(box, nonce, key)

Authenticates and decrypts the given secret box using the key and the nonce.

Returns the original message, or false if authentication fails.

nacl.secretbox.keyLength = 32

Length of key in bytes.

nacl.secretbox.nonceLength = 24

Length of nonce in bytes.

nacl.secretbox.overheadLength = 16

Length of overhead added to secret box compared to original message.

Scalar multiplication

Implements curve25519.

nacl.scalarMult(n, p)

Multiplies an integer n by a group element p and returns the resulting group element.

nacl.scalarMult.base(n)

Multiplies an integer n by a standard group element and returns the resulting group element.

nacl.scalarMult.scalarLength = 32

Length of scalar in bytes.

nacl.scalarMult.groupElementLength = 32

Length of group element in bytes.

Signatures

Implements ed25519.

nacl.sign.keyPair()

Generates a new random key pair for signing and returns it as an object with publicKey and secretKey members:

{
   publicKey: ...,  // Uint8Array with 32-byte public key
   secretKey: ...   // Uint8Array with 64-byte secret key
}

nacl.sign.keyPair.fromSecretKey(secretKey)

Returns a signing key pair with public key corresponding to the given 64-byte secret key. The secret key must have been generated by nacl.sign.keyPair or nacl.sign.keyPair.fromSeed.

nacl.sign.keyPair.fromSeed(seed)

Returns a new signing key pair generated deterministically from a 32-byte seed. The seed must contain enough entropy to be secure. This method is not recommended for general use: instead, use nacl.sign.keyPair to generate a new key pair from a random seed.

nacl.sign(message, secretKey)

Signs the message using the secret key and returns a signed message.

nacl.sign.open(signedMessage, publicKey)

Verifies the signed message and returns the message without signature.

Returns null if verification failed.

nacl.sign.detached(message, secretKey)

Signs the message using the secret key and returns a signature.

nacl.sign.detached.verify(message, signature, publicKey)

Verifies the signature for the message and returns true if verification succeeded or false if it failed.

nacl.sign.publicKeyLength = 32

Length of signing public key in bytes.

nacl.sign.secretKeyLength = 64

Length of signing secret key in bytes.

nacl.sign.seedLength = 32

Length of seed for nacl.sign.keyPair.fromSeed in bytes.

nacl.sign.signatureLength = 64

Length of signature in bytes.

Hashing

Implements SHA-512.

nacl.hash(message)

Returns SHA-512 hash of the message.

nacl.hash.hashLength = 64

Length of hash in bytes.

Random bytes generation

nacl.randomBytes(length)

Returns a Uint8Array of the given length containing random bytes of cryptographic quality.

Implementation note

TweetNaCl.js uses the following methods to generate random bytes, depending on the platform it runs on:

  • window.crypto.getRandomValues (WebCrypto standard)
  • window.msCrypto.getRandomValues (Internet Explorer 11)
  • crypto.randomBytes (Node.js)

If the platform doesn’t provide a suitable PRNG, the following functions, which require random numbers, will throw an exception:

  • nacl.randomBytes
  • nacl.box.keyPair
  • nacl.sign.keyPair

Other functions are deterministic and will continue working.

If a platform you are targeting doesn’t implement a secure random number generator, but you somehow have a cryptographically strong source of entropy (not Math.random!), and you know what you are doing, you can plug it into TweetNaCl.js like this:

nacl.setPRNG(function(x, n) {
  // ... copy n random bytes into x ...
});

Note that nacl.setPRNG completely replaces internal random byte generator with the one provided.

Constant-time comparison

nacl.verify(x, y)

Compares x and y in constant time and returns true if their lengths are non-zero and equal, and their contents are equal.

Returns false if either of the arguments has zero length, or arguments have different lengths, or their contents differ.
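A minimal sketch of how such a comparison is typically implemented (illustrative only — use nacl.verify itself in real code):

```javascript
// XOR-accumulate over the full length so timing does not depend on where
// the first differing byte occurs.
function verify(x, y) {
  if (x.length === 0 || y.length === 0) return false;
  if (x.length !== y.length) return false;
  let diff = 0;
  for (let i = 0; i < x.length; i++) diff |= x[i] ^ y[i];
  return diff === 0;
}

console.log(verify(new Uint8Array([1, 2, 3]), new Uint8Array([1, 2, 3]))); // true
console.log(verify(new Uint8Array([1, 2, 3]), new Uint8Array([1, 2, 4]))); // false
console.log(verify(new Uint8Array(0), new Uint8Array(0)));                 // false
```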

System requirements

TweetNaCl.js supports modern browsers that have a cryptographically secure pseudorandom number generator and typed arrays, including the latest versions of:

  • Chrome
  • Firefox
  • Safari (Mac, iOS)
  • Internet Explorer 11

Other systems:

  • Node.js

Development and testing

Install NPM modules needed for development:

npm install

To build minified versions:

npm run build

Tests use minified version, so make sure to rebuild it every time you change nacl.js or nacl-fast.js.

Testing

To run tests in Node.js:

npm run test-node

By default all tests described here work on nacl.min.js. To test other versions, set environment variable NACL_SRC to the file name you want to test. For example, the following command will test fast minified version:

$ NACL_SRC=nacl-fast.min.js npm run test-node

To run full suite of tests in Node.js, including comparing outputs of JavaScript port to outputs of the original C version:

npm run test-node-all

To prepare tests for browsers:

npm run build-test-browser

and then open test/browser/test.html (or test/browser/test-fast.html) to run them.

To run headless browser tests with tape-run (powered by Electron):

npm run test-browser

(If you get Error: spawn ENOENT, install xvfb: sudo apt-get install xvfb.)

To run tests in both Node and Electron:

npm test

Benchmarking

To run benchmarks in Node.js:

npm run bench
$ NACL_SRC=nacl-fast.min.js npm run bench

To run benchmarks in a browser, open test/benchmark/bench.html (or test/benchmark/bench-fast.html).

Benchmarks

For reference, here are benchmarks from MacBook Pro (Retina, 13-inch, Mid 2014) laptop with 2.6 GHz Intel Core i5 CPU (Intel) in Chrome 53/OS X and Xiaomi Redmi Note 3 smartphone with 1.8 GHz Qualcomm Snapdragon 650 64-bit CPU (ARM) in Chrome 52/Android:

|              | nacl.js (Intel) | nacl-fast.js (Intel) | nacl.js (ARM) | nacl-fast.js (ARM) |
| ------------ | --------------- | -------------------- | ------------- | ------------------ |
| salsa20      | 1.3 MB/s        | 128 MB/s             | 0.4 MB/s      | 43 MB/s            |
| poly1305     | 13 MB/s         | 171 MB/s             | 4 MB/s        | 52 MB/s            |
| hash         | 4 MB/s          | 34 MB/s              | 0.9 MB/s      | 12 MB/s            |
| secretbox 1K | 1113 op/s       | 57583 op/s           | 334 op/s      | 14227 op/s         |
| box 1K       | 145 op/s        | 718 op/s             | 37 op/s       | 368 op/s           |
| scalarMult   | 171 op/s        | 733 op/s             | 56 op/s       | 380 op/s           |
| sign         | 77 op/s         | 200 op/s             | 20 op/s       | 61 op/s            |
| sign.open    | 39 op/s         | 102 op/s             | 11 op/s       | 31 op/s            |

(You can run benchmarks on your devices by clicking on the links at the bottom of the home page).

In short, with nacl-fast.js and 1024-byte messages you can expect to encrypt and authenticate more than 57000 messages per second on a typical laptop or more than 14000 messages per second on a $170 smartphone, sign about 200 and verify 100 messages per second on a laptop or 60 and 30 messages per second on a smartphone, per CPU core (with Web Workers you can do these operations in parallel), which is good enough for most applications.

See AUTHORS.md file.

Third-party libraries based on TweetNaCl.js

Who uses it

Some notable users of TweetNaCl.js:



@datastructures-js/linked-list

build:? npm npm npm

a javascript implementation of LinkedList & DoublyLinkedList.

Linked List Linked List
Doubly Linked List Doubly Linked List


Table of Contents

install

API

require

import

Construction

Example

.insertFirst(value)

inserts a node at the beginning of the list.

params
name type
value object
return description
LinkedList LinkedListNode the inserted node
DoublyLinkedList DoublyLinkedListNode
runtime
O(1)

Example
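The package's own examples are collapsed in this document; as an illustration of the insertFirst semantics only, here is a minimal self-contained sketch (not the package's implementation):

```javascript
// Self-contained sketch of head insertion in a singly linked list.
class LinkedListNode {
  constructor(value, next = null) {
    this.value = value;
    this.next = next;
  }
  getValue() { return this.value; }
  getNext() { return this.next; }
}

class LinkedList {
  constructor() { this.head = null; }

  // O(1): the new node simply becomes the head.
  insertFirst(value) {
    this.head = new LinkedListNode(value, this.head);
    return this.head; // the inserted node
  }

  toArray() {
    const result = [];
    for (let n = this.head; n !== null; n = n.getNext()) result.push(n.getValue());
    return result;
  }
}

const list = new LinkedList();
list.insertFirst('b');
const node = list.insertFirst('a');
console.log(node.getValue()); // 'a'
console.log(list.toArray());  // [ 'a', 'b' ]
```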

.insertLast(value)

inserts a node at the end of the list.

params
name type
value object
return description
LinkedList LinkedListNode the inserted node
DoublyLinkedList DoublyLinkedListNode
runtime
LinkedList O(n)
DoublyLinkedList O(1)

Example

.insertAt(value, position)

inserts a node at a specific position of the list. First (head) node is at position 0.

params
name type
value object
position number
return description
LinkedList LinkedListNode the inserted node
DoublyLinkedList DoublyLinkedListNode
runtime
O(n)

Example

.forEach(cb)

Loops over the linked list from beginning to end, passing each node to the callback.

params
name type
cb function
runtime
O(n)

Example

.forEachReverse(cb)

Only in DoublyLinkedList. Loops over the doubly linked list from end to beginning, passing each node to the callback.

params
name type
cb function
runtime
O(n)

Example

.find(cb)

returns the first node that returns true from the callback, or null if nothing is found.

params
name type
cb function
return description
LinkedList LinkedListNode the first found node
DoublyLinkedList DoublyLinkedListNode
runtime
O(n)

Example

.filter(cb)

returns a filtered list of all the nodes that return true from the callback.

params
name type
cb function
return
LinkedList LinkedListNode
DoublyLinkedList DoublyLinkedListNode
runtime
O(n)

Example

.toArray()

converts the linked list into an array.

return
array
runtime
O(n)

Example

.isEmpty()

checks if the linked list is empty.

return
boolean
runtime
O(1)

Example

.head()

returns the head node of the linked list.

return
LinkedList LinkedListNode
DoublyLinkedList DoublyLinkedListNode
runtime
O(1)

Example

.tail()

returns the tail node of the doubly linked list.

return
DoublyLinkedListNode
runtime
O(1)

Example

.count()

returns the number of nodes in the linked list.

return
number
runtime
O(1)

Example

.removeFirst()

removes the first (head) node of the list.

return description
boolean true if a node has been removed
runtime
O(1)

Example

.removeLast()

removes the last node from the list.

return description
boolean true if a node has been removed
runtime
LinkedList O(n)
DoublyLinkedList O(1)

Example

.removeAt(position)

removes a node at a specific position. First (head) node is at position 0.

params
name type
position number
return description
boolean true if a node has been removed
runtime
O(n)

Example

.removeEach(cb)

Loops over the linked list from beginning to end, removing the nodes that return true from the callback.

params
name type
cb function
return description
number number of removed nodes
runtime
O(n)

Example

.clear()

removes all nodes in the linked list.

runtime
O(1)

Example

LinkedListNode

.getValue()

returns the node’s value.

return
object

.getNext()

returns the next connected node or null if it’s the last node.

return
LinkedListNode

DoublyLinkedListNode

.getValue()

returns the node’s value.

return
object

.getPrev()

returns the previous connected node or null if it’s the first node.

return
DoublyLinkedListNode

.getNext()

returns the next connected node or null if it’s the last node.

return
DoublyLinkedListNode

Build

grunt build


to-regex-range Donate NPM version NPM monthly downloads NPM total downloads Linux Build Status

Pass two numbers, get a regex-compatible source string for matching ranges. Validated against more than 2.78 million test assertions.

Please consider following this project’s author, Jon Schlinkert, and consider starring the project to show your :heart: and support.

Install

Install with npm:

What does this do?


This library generates the source string to be passed to new RegExp() for matching a range of numbers.

Example

A string is returned so that you can do whatever you need with it before passing it to new RegExp() (like adding ^ or $ boundaries, defining flags, or combining it with another string).
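For instance, taking the source string this library documents for toRegexRange(1, 50) (see the examples table further below):

```javascript
// The documented source string for toRegexRange(1, 50).
const source = '[1-9]|[1-4][0-9]|50';

// Add ^...$ boundaries so only whole strings in the range match.
const re = new RegExp(`^(?:${source})$`);

console.log(re.test('1'));  // true
console.log(re.test('50')); // true
console.log(re.test('51')); // false
console.log(re.test('05')); // false
```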


Why use this library?


Convenience

Creating regular expressions for matching numbers gets deceptively complicated pretty fast.

For example, let’s say you need a validation regex for matching part of a user-id, postal code, social security number, tax id, etc:

  • regex for matching 1 => /1/ (easy enough)
  • regex for matching 1 through 5 => /[1-5]/ (not bad…)
  • regex for matching 1 or 5 => /(1|5)/ (still easy…)
  • regex for matching 1 through 50 => /([1-9]|[1-4][0-9]|50)/ (uh-oh…)
  • regex for matching 1 through 55 => /([1-9]|[1-4][0-9]|5[0-5])/ (no prob, I can do this…)
  • regex for matching 1 through 555 => /([1-9]|[1-9][0-9]|[1-4][0-9]{2}|5[0-4][0-9]|55[0-5])/ (maybe not…)
  • regex for matching 0001 through 5555 => /(0{3}[1-9]|0{2}[1-9][0-9]|0[1-9][0-9]{2}|[1-4][0-9]{3}|5[0-4][0-9]{2}|55[0-4][0-9]|555[0-5])/ (okay, I get the point!)

The numbers are contrived, but they’re also really basic. In the real world you might need to generate a regex on-the-fly for validation.

Learn more

If you’re interested in learning more about character classes and other regex features, I personally have always found regular-expressions.info to be pretty useful.

Heavily tested

As of April 07, 2019, this library runs >1m test assertions against generated regex-ranges to provide brute-force verification that results are correct.

Tests run in ~280ms on my MacBook Pro, 2.5 GHz Intel Core i7.

Optimized

Generated regular expressions are optimized:

  • duplicate sequences and character classes are reduced using quantifiers
  • smart enough to use ? conditionals when number(s) or range(s) can be positive or negative
  • uses fragment caching to avoid processing the same exact string more than once


Usage

Add this library to your javascript application with the following line of code

The main export is a function that takes two integers: the min value and max value (formatted as strings or numbers).

Options

options.capture

Type: boolean

Default: undefined

Wrap the returned value in parentheses when there is more than one regex condition. Useful when you’re dynamically generating ranges.

options.shorthand

Type: boolean

Default: undefined

Use the regex shorthand for [0-9]:

options.relaxZeros

Type: boolean

Default: true

This option relaxes matching for leading zeros when ranges are zero-padded.
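For example, the source string documented for toRegexRange(001, 100) (taken from the examples table below, produced with the default relaxZeros: true) accepts values with or without their leading zeros:

```javascript
// Documented source for toRegexRange(001, 100) with relaxZeros: true.
const re = new RegExp('^(?:0{0,2}[1-9]|0?[1-9][0-9]|100)$');

console.log(re.test('001')); // true – fully padded
console.log(re.test('01'));  // true – partially padded
console.log(re.test('1'));   // true – leading zeros relaxed
console.log(re.test('101')); // false – out of range
```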

When relaxZeros is false, matching is strict:

Examples

| Range | Result | Compile time |
| --- | --- | --- |
| toRegexRange(-10, 10) | -[1-9]\|-?10\|[0-9] | 132μs |
| toRegexRange(-100, -10) | -1[0-9]\|-[2-9][0-9]\|-100 | 50μs |
| toRegexRange(-100, 100) | -[1-9]\|-?[1-9][0-9]\|-?100\|[0-9] | 42μs |
| toRegexRange(001, 100) | 0{0,2}[1-9]\|0?[1-9][0-9]\|100 | 109μs |
| toRegexRange(001, 555) | 0{0,2}[1-9]\|0?[1-9][0-9]\|[1-4][0-9]{2}\|5[0-4][0-9]\|55[0-5] | 51μs |
| toRegexRange(0010, 1000) | 0{0,2}1[0-9]\|0{0,2}[2-9][0-9]\|0?[1-9][0-9]{2}\|1000 | 31μs |
| toRegexRange(1, 50) | [1-9]\|[1-4][0-9]\|50 | 24μs |
| toRegexRange(1, 55) | [1-9]\|[1-4][0-9]\|5[0-5] | 23μs |
| toRegexRange(1, 555) | [1-9]\|[1-9][0-9]\|[1-4][0-9]{2}\|5[0-4][0-9]\|55[0-5] | 30μs |
| toRegexRange(1, 5555) | [1-9]\|[1-9][0-9]{1,2}\|[1-4][0-9]{3}\|5[0-4][0-9]{2}\|55[0-4][0-9]\|555[0-5] | 43μs |
| toRegexRange(111, 555) | 11[1-9]\|1[2-9][0-9]\|[2-4][0-9]{2}\|5[0-4][0-9]\|55[0-5] | 38μs |
| toRegexRange(29, 51) | 29\|[34][0-9]\|5[01] | 24μs |
| toRegexRange(31, 877) | 3[1-9]\|[4-9][0-9]\|[1-7][0-9]{2}\|8[0-6][0-9]\|87[0-7] | 32μs |
| toRegexRange(5, 5) | 5 | 8μs |
| toRegexRange(5, 6) | 5\|6 | 11μs |
| toRegexRange(1, 2) | 1\|2 | 6μs |
| toRegexRange(1, 5) | [1-5] | 15μs |
| toRegexRange(1, 10) | [1-9]\|10 | 22μs |
| toRegexRange(1, 100) | [1-9]\|[1-9][0-9]\|100 | 25μs |
| toRegexRange(1, 1000) | [1-9]\|[1-9][0-9]{1,2}\|1000 | 31μs |
| toRegexRange(1, 10000) | [1-9]\|[1-9][0-9]{1,3}\|10000 | 34μs |
| toRegexRange(1, 100000) | [1-9]\|[1-9][0-9]{1,4}\|100000 | 36μs |
| toRegexRange(1, 1000000) | [1-9]\|[1-9][0-9]{1,5}\|1000000 | 42μs |
| toRegexRange(1, 10000000) | [1-9]\|[1-9][0-9]{1,6}\|10000000 | 42μs |

Heads up!

Order of arguments

When the min is larger than the max, values will be flipped to create a valid range:

Is effectively flipped to:

Steps / increments

This library does not support steps (increments). A PR to add support would be welcome.

History

v2.0.0 - 2017-04-21

New features

Adds support for zero-padding!

v1.0.0

Optimizations

Repeating ranges are now grouped using quantifiers. Processing time is roughly the same, but the generated regex is much smaller, which should result in faster matching.

Attribution

Inspired by the python library range-regex.

About

Contributing

Pull requests and stars are always welcome. For bugs and feature requests, please create an issue.

Running Tests

Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command:

Building docs

(This project’s readme.md is generated by verb, please don’t edit the readme directly. Any changes to the readme must be made in the .verb.md readme template.)

To generate the readme, run the following command:

You might also be interested in these projects:

  • expand-range: Fast, bash-like range expansion. Expand a range of numbers or letters, uppercase or lowercase. Used… more | homepage
  • fill-range: Fill in a range of numbers or letters, optionally passing an increment or step to… more | homepage
  • micromatch: Glob matching for javascript/node.js. A drop-in replacement and faster alternative to minimatch and multimatch. | homepage
  • repeat-element: Create an array by repeating the given value n times. | homepage
  • repeat-string: Repeat the given string n times. Fastest implementation for repeating a string. | homepage
Commits Contributor
63 jonschlinkert
3 doowb
2 realityking

Author

Jon Schlinkert

Please consider supporting me on Patreon, or start your own Patreon page!


This file was generated by verb-generate-readme, v0.8.0, on April 07, 2019.



Chokidar

Weekly downloads Yearly downloads

A neat wrapper around Node.js fs.watch / fs.watchFile / FSEvents.

NPM

Version 3 is out! Check out our blog post about it: Chokidar 3: How to save 32TB of traffic every week

Why?

Node.js fs.watch:

  • Doesn’t report filenames on MacOS.
  • Doesn’t report events at all when using editors like Sublime on MacOS.
  • Often reports events twice.
  • Emits most changes as rename.
  • Does not provide an easy way to recursively watch file trees.

Node.js fs.watchFile:

  • Almost as bad at event handling.
  • Also does not provide any recursive watching.
  • Results in high CPU utilization.

Chokidar resolves these problems.

Initially made for Brunch (an ultra-swift web app build tool), it is now used in Microsoft’s Visual Studio Code, gulp, karma, PM2, browserify, webpack, BrowserSync, and many others. It has proven itself in production environments.

How?

Chokidar does still rely on the Node.js core fs module, but when using fs.watch and fs.watchFile for watching, it normalizes the events it receives, often checking for truth by getting file stats and/or dir contents.

On MacOS, chokidar by default uses a native extension exposing the Darwin FSEvents API. This provides very efficient recursive watching compared with implementations like kqueue available on most *nix platforms. Chokidar still does have to do some work to normalize the events received that way as well.

On other platforms, the fs.watch-based implementation is the default, which avoids polling and keeps CPU usage down. Be advised that chokidar will initiate watchers recursively for everything within scope of the paths that have been specified, so be judicious about not wasting system resources by watching much more than needed.

Getting started

Install with npm:

Then require and use it in your code:

API

// Example of a more typical implementation structure:

// Initialize watcher.
const watcher = chokidar.watch('file, dir, glob, or array', {
  ignored: /(^|[\/\\])\../, // ignore dotfiles
  persistent: true
});

// Something to use when events are received.
const log = console.log.bind(console);
// Add event listeners.
watcher
  .on('add', path => log(`File ${path} has been added`))
  .on('change', path => log(`File ${path} has been changed`))
  .on('unlink', path => log(`File ${path} has been removed`));

// More possible events.
watcher
  .on('addDir', path => log(`Directory ${path} has been added`))
  .on('unlinkDir', path => log(`Directory ${path} has been removed`))
  .on('error', error => log(`Watcher error: ${error}`))
  .on('ready', () => log('Initial scan complete. Ready for changes'))
  .on('raw', (event, path, details) => { // internal
    log('Raw event info:', event, path, details);
  });

// 'add', 'addDir' and 'change' events also receive stat() results as second
// argument when available: https://nodejs.org/api/fs.html#fs_class_fs_stats
watcher.on('change', (path, stats) => {
  if (stats) console.log(`File ${path} changed size to ${stats.size}`);
});

// Watch new files.
watcher.add('new-file');
watcher.add(['new-file-2', 'new-file-3', '**/other-file*']);

// Get list of actual paths being watched on the filesystem
const watchedPaths = watcher.getWatched();

// Un-watch some files.
await watcher.unwatch('new-file*');

// Stop watching.
// The method is async!
watcher.close().then(() => console.log('closed'));

// Full list of options. See below for descriptions.
// Do not use this example!
chokidar.watch('file', {
  persistent: true,

  ignored: '*.txt',
  ignoreInitial: false,
  followSymlinks: true,
  cwd: '.',
  disableGlobbing: false,

  usePolling: false,
  interval: 100,
  binaryInterval: 300,
  alwaysStat: false,
  depth: 99,
  awaitWriteFinish: {
    stabilityThreshold: 2000,
    pollInterval: 100
  },

  atomic: true // or a custom 'atomicity delay', in milliseconds (default 100)
});

chokidar.watch(paths, [options])

  • paths (string or array of strings). Paths to files, dirs to be watched recursively, or glob patterns.
    • Note: globs must not contain Windows path separators (\), because standard glob syntax treats the backslash as an escape character; replace them with forward slashes (/).
    • Note 2: for additional glob documentation, check out low-level library: picomatch.
  • options (object) Options object as defined below:
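Because backslashes are escape characters in globs, Windows-style patterns need converting first. A hypothetical one-line helper (toPosixGlob is not part of chokidar) shows the idea:

```javascript
// Convert Windows path separators to the forward slashes globs require.
const toPosixGlob = (p) => p.split('\\').join('/');

console.log(toPosixGlob('src\\**\\*.js')); // → src/**/*.js
```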

Persistence

  • persistent (default: true). Indicates whether the process should continue to run as long as files are being watched. If set to false when using fsevents to watch, no more events will be emitted after ready, even if the process continues to run.

Path filtering

  • ignored (anymatch-compatible definition) Defines files/paths to be ignored. The whole relative or absolute path is tested, not just the filename. If a function with two arguments is provided, it gets called twice per path: once with a single argument (the path), and a second time with two arguments (the path and the fs.Stats object of that path).
  • ignoreInitial (default: false). If set to false then add/addDir events are also emitted for matching paths while instantiating the watching as chokidar discovers these file paths (before the ready event).
  • followSymlinks (default: true). When false, only the symlinks themselves will be watched for changes instead of following the link references and bubbling events through the link’s path.
  • cwd (no default). The base directory from which watch paths are to be derived. Paths emitted with events will be relative to this.
  • disableGlobbing (default: false). If set to true then the strings passed to .watch() and .add() are treated as literal path names, even if they look like globs.

Performance

  • usePolling (default: false). Whether to use fs.watchFile (backed by polling) or fs.watch. If polling leads to high CPU utilization, consider setting this to false. It is typically necessary to set this to true to successfully watch files over a network, and it may be necessary to successfully watch files in other non-standard situations. Setting to true explicitly on macOS overrides the useFsEvents default. You may also set the CHOKIDAR_USEPOLLING env variable to true (1) or false (0) in order to override this option.
  • Polling-specific settings (effective when usePolling: true)
    • interval (default: 100). Interval of file system polling, in milliseconds. You may also set the CHOKIDAR_INTERVAL env variable to override this option.
    • binaryInterval (default: 300). Interval of file system polling for binary files. (see list of binary extensions)
  • useFsEvents (default: true on macOS). Whether to use the fsevents watching interface if available. When set to true explicitly and fsevents is available, this supersedes the usePolling setting. When set to false on macOS, usePolling: true becomes the default.
  • alwaysStat (default: false). If relying upon the fs.Stats object that may get passed with add, addDir, and change events, set this to true to ensure it is provided even in cases where it wasn’t already available from the underlying watch events.
  • depth (default: undefined). If set, limits how many levels of subdirectories will be traversed.
  • awaitWriteFinish (default: false). By default, the add event will fire when a file first appears on disk, before the entire file has been written. Furthermore, in some cases some change events will be emitted while the file is being written. In some cases, especially when watching large files, there is a need to wait for the write operation to finish before responding to a file creation or modification. Setting awaitWriteFinish to true (or a truthy value) will poll file size, holding its add and change events until the size does not change for a configurable amount of time. The appropriate duration setting is heavily dependent on the OS and hardware; for accurate detection this parameter should be relatively high, making file watching much less responsive. Use with caution.
    • options.awaitWriteFinish can be set to an object in order to adjust timing params:
    • awaitWriteFinish.stabilityThreshold (default: 2000). Amount of time in milliseconds for a file size to remain constant before emitting its event.
    • awaitWriteFinish.pollInterval (default: 100). File size polling interval, in milliseconds.

Errors

  • atomic (default: true if useFsEvents and usePolling are false). Automatically filters out artifacts that occur when using editors that use “atomic writes” instead of writing directly to the source file. If a file is re-added within 100 ms of being deleted, Chokidar emits a change event rather than unlink then add. If the default of 100 ms does not work well for you, you can override it by setting atomic to a custom value, in milliseconds.

Methods & Events

chokidar.watch() produces an instance of FSWatcher. Methods of FSWatcher:

  • .add(path / paths): Add files, directories, or glob patterns for tracking. Takes an array of strings or just one string.
  • .on(event, callback): Listen for an FS event. Available events: add, addDir, change, unlink, unlinkDir, ready, raw, error. Additionally all is available which gets emitted with the underlying event name and path for every event other than ready, raw, and error. raw is internal, use it carefully.
  • .unwatch(path / paths): Stop watching files, directories, or glob patterns. Takes an array of strings or just one string. Use with await to ensure bugs don’t happen.
  • .close(): async Removes all listeners from watched files. Asynchronous, returns Promise.
  • .getWatched(): Returns an object representing all the paths on the file system being watched by this FSWatcher instance. The object’s keys are all the directories (using absolute paths unless the cwd option was used), and the values are arrays of the names of the items contained in each directory.

CLI

If you need a CLI interface for your file watching, check out chokidar-cli, allowing you to execute a command on each change, or get a stdio stream of change events.

Install Troubleshooting

  • npm WARN optional dep failed, continuing fsevents@n.n.n
    • This message is a normal part of how npm handles optional dependencies and is not indicative of a problem. Even if accompanied by other related error messages, Chokidar should function properly.
  • TypeError: fsevents is not a constructor
    • Update chokidar by doing rm -rf node_modules package-lock.json yarn.lock && npm install, or update your dependency that uses chokidar.
  • Chokidar is producing ENOSPC errors on Linux, like this:
    • bash: cannot set terminal process group (-1): Inappropriate ioctl for device
      bash: no job control in this shell
      Error: watch /home/ ENOSPC
    • This means Chokidar ran out of file handles and you’ll need to increase their count by executing the following command in Terminal: echo fs.inotify.max_user_watches=524288 | sudo tee -a /etc/sysctl.conf && sudo sysctl -p

Changelog

Also

Why was chokidar named this way? What’s the meaning behind it?

Chowkidar is a transliteration of a Hindi word meaning ‘watchman, gatekeeper’, चौकीदार. This ultimately comes from Sanskrit चतुष्क (crossway, quadrangle, consisting-of-four).



jsesc Build status Code coverage status Dependency status

Given some data, jsesc returns a stringified representation of that data. jsesc is similar to JSON.stringify() except:

  1. it outputs JavaScript instead of JSON by default, enabling support for data structures like ES6 maps and sets;
  2. it offers many options to customize the output;
  3. its output is ASCII-safe by default, thanks to its use of escape sequences where needed.

For any input, jsesc generates the shortest possible valid printable-ASCII-only output. Here’s an online demo.

jsesc’s output can be used instead of JSON.stringify’s to avoid mojibake and other encoding issues, or even to avoid errors when passing JSON-formatted data (which may contain U+2028 LINE SEPARATOR, U+2029 PARAGRAPH SEPARATOR, or lone surrogates) to a JavaScript parser or a UTF-8 encoder.

Installation

Via npm:

In Node.js:

API

jsesc(value, options)

This function takes a value and returns an escaped version of the value where any characters that are not printable ASCII symbols are escaped using the shortest possible (but valid) escape sequences for use in JavaScript strings. The first supported value type is strings:

Instead of a string, the value can also be an array, an object, a map, a set, or a buffer. In such cases, jsesc returns a stringified version of the value where any characters that are not printable ASCII symbols are escaped in the same way.

The optional options argument accepts an object with the following options:

quotes

The default value for the quotes option is 'single'. This means that any occurrences of ' in the input string are escaped as \', so that the output can be used in a string literal wrapped in single quotes.

If you want to use the output as part of a string literal wrapped in double quotes, set the quotes option to 'double'.

If you want to use the output as part of a template literal (i.e. wrapped in backticks), set the quotes option to 'backtick'.
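As an illustration only (this is not jsesc's implementation), escaping for a chosen quote style boils down to backslash-escaping that one quote character plus the backslash itself:

```javascript
// Sketch: escape the wrapping quote character and backslashes.
// escapeForQuotes is a hypothetical helper, not part of jsesc's API.
function escapeForQuotes(str, quotes = 'single') {
  const q = { single: "'", double: '"', backtick: '`' }[quotes];
  return str.replace(/\\/g, '\\\\').split(q).join('\\' + q);
}

console.log(escapeForQuotes("I'm"));            // → I\'m
console.log(escapeForQuotes('say "hi"', 'double')); // → say \"hi\"
```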

This setting also affects the output for arrays and objects:

numbers

The default value for the numbers option is 'decimal'. This means that any numeric values are represented using decimal integer literals. Other valid options are 'binary', 'octal', and 'hexadecimal', which result in binary integer literals, octal integer literals, and hexadecimal integer literals, respectively.
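The four styles roughly correspond to what plain Number.prototype.toString produces with the matching radix prefixes (shown here only as a preview, not as jsesc's implementation):

```javascript
const n = 255;
console.log('0b' + n.toString(2));  // binary      → 0b11111111
console.log('0o' + n.toString(8));  // octal       → 0o377
console.log('0x' + n.toString(16)); // hexadecimal → 0xff
console.log(String(n));             // decimal     → 255
```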

wrap

The wrap option takes a boolean value (true or false), and defaults to false (disabled). When enabled, the output is a valid JavaScript string literal wrapped in quotes. The type of quotes can be specified through the quotes setting.

es6

The es6 option takes a boolean value (true or false), and defaults to false (disabled). When enabled, any astral Unicode symbols in the input are escaped using ECMAScript 6 Unicode code point escape sequences instead of using separate escape sequences for each surrogate half. If backwards compatibility with ES5 environments is a concern, don’t enable this setting. If the json setting is enabled, the value for the es6 setting is ignored (as if it was false).

escapeEverything

The escapeEverything option takes a boolean value (true or false), and defaults to false (disabled). When enabled, all the symbols in the output are escaped — even printable ASCII symbols.

This setting also affects the output for string literals within arrays and objects.

minimal

The minimal option takes a boolean value (true or false), and defaults to false (disabled). When enabled, only a limited set of symbols in the output are escaped:

  • U+0000 \0
  • U+0008 \b
  • U+0009 \t
  • U+000A \n
  • U+000C \f
  • U+000D \r
  • U+005C \\
  • U+2028 \u2028
  • U+2029 \u2029
  • whatever symbol is being used for wrapping string literals (based on the quotes option)

Note: with this option enabled, jsesc output is no longer guaranteed to be ASCII-safe.


isScriptContext

The isScriptContext option takes a boolean value (true or false), and defaults to false (disabled). When enabled, occurrences of </script and </style in the output are escaped as <\/script and <\/style, and <!-- is escaped as \x3C!-- (or \u003C!-- when the json option is enabled). This setting is useful when jsesc’s output ends up as part of a <script> or <style> element in an HTML document.

compact

The compact option takes a boolean value (true or false), and defaults to true (enabled). When enabled, the output for arrays and objects is as compact as possible; it’s not formatted nicely.

This setting has no effect on the output for strings.

indent

The indent option takes a string value, and defaults to '\t'. When the compact setting is disabled (false), the value of the indent option is used to format the output for arrays and objects.

This setting has no effect on the output for strings.

indentLevel

The indentLevel option takes a numeric value, and defaults to 0. It represents the current indentation level, i.e. the number of times the value of the indent option is repeated.

json

The json option takes a boolean value (true or false), and defaults to false (disabled). When enabled, the output is valid JSON. Hexadecimal character escape sequences and the \v or \0 escape sequences are not used. Setting json: true implies quotes: 'double', wrap: true, es6: false, although these values can still be overridden if needed — but in such cases, the output won’t be valid JSON anymore.

Note: Using this option on objects or arrays that contain non-string values relies on JSON.stringify(). For legacy environments like IE ≤ 7, use a JSON polyfill.

lowercaseHex

The lowercaseHex option takes a boolean value (true or false), and defaults to false (disabled). When enabled, any alphabetical hexadecimal digits in escape sequences as well as any hexadecimal integer literals (see the numbers option) in the output are in lowercase.

jsesc.version

A string representing the semantic version number.

Using the jsesc binary

To use the jsesc binary in your shell, simply install jsesc globally using npm:

After that you’re able to escape strings from the command line:

To escape arrays or objects containing string values, use the -o/--object option:

To prettify the output in such cases, use the -p/--pretty option:

For valid JSON output, use the -j/--json option:

Read a local JSON file, escape any non-ASCII symbols, and save the result to a new file:

Or do the same with an online JSON file:

See jsesc --help for the full list of options.

As of v2.0.0, jsesc supports Node.js v4+ only.

Older versions (up to jsesc v1.3.0) support Chrome 27, Firefox 3, Safari 4, Opera 10, IE 6, Node.js v6.0.0, Narwhal 0.3.2, RingoJS 0.8-0.11, PhantomJS 1.9.0, and Rhino 1.7RC4. Note: Using the json option on objects or arrays that contain non-string values relies on JSON.stringify(). For legacy environments like IE ≤ 7, use a JSON polyfill.

Author

twitter/mathias
Mathias Bynens


qs Version Badge

npm badge

A querystring parsing and stringifying library with some added security.

Lead Maintainer: Jordan Harband

The qs module was originally created and maintained by TJ Holowaychuk.

Usage

Parsing Objects

qs allows you to create nested objects within your query strings, by surrounding the name of sub-keys with square brackets []. For example, the string 'foo[bar]=baz' converts to:
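The conversion can be sketched in plain JavaScript (this is a concept sketch only, not qs itself; it handles a single key=value pair and none of qs's edge cases):

```javascript
// Sketch: turn 'foo[bar]=baz' into { foo: { bar: 'baz' } }.
// parsePair is a hypothetical helper for illustration.
function parsePair(pair) {
  const [rawKey, value] = pair.split('=');
  const keys = rawKey.replace(/\]/g, '').split('['); // 'foo[bar]' → ['foo', 'bar']
  const result = {};
  let cursor = result;
  keys.forEach((k, i) => {
    // Last key receives the value; intermediate keys become nested objects.
    cursor = cursor[k] = (i === keys.length - 1) ? decodeURIComponent(value) : {};
  });
  return result;
}

console.log(parsePair('foo[bar]=baz')); // → { foo: { bar: 'baz' } }
```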

When using the plainObjects option, the parsed value is returned as a null-prototype object, created via Object.create(null). Be aware that prototype methods will not exist on it and that a user may set those names to whatever value they like:

By default, parameters that would overwrite properties on the object prototype are ignored. If you wish to keep the data from those fields, either use plainObjects as mentioned above, or set allowPrototypes to true, which will allow user input to overwrite those properties. WARNING: It is generally a bad idea to enable this option, as it can cause problems when attempting to use the properties that have been overwritten. Always be careful with this option.

URI encoded strings work too:

You can also nest your objects, like 'foo[bar][baz]=foobarbaz':

By default, when nesting objects qs will only parse up to 5 children deep. This means if you attempt to parse a string like 'a[b][c][d][e][f][g][h][i]=j' your resulting object will be:

This depth can be overridden by passing a depth option to qs.parse(string, [options]):

The depth limit helps mitigate abuse when qs is used to parse user input, and it is recommended to keep it a reasonably small number.

For similar reasons, by default qs will only parse up to 1000 parameters. This can be overridden by passing a parameterLimit option:

To bypass the leading question mark, use ignoreQueryPrefix:

An optional delimiter can also be passed:

Delimiters can be a regular expression too:

Option allowDots can be used to enable dot notation:

Parsing Arrays

qs can also parse arrays using a similar [] notation:

You may specify an index as well:

Note that the only difference between an index in an array and a key in an object is that the value between the brackets must be a number to create an array. When creating arrays with specific indices, qs will compact a sparse array to only the existing values preserving their order:

Note that an empty string is also a value, and will be preserved:

qs will also limit specifying indices in an array to a maximum index of 20. Any array members with an index greater than 20 will instead be converted to an object with the index as the key:

This limit can be overridden by passing an arrayLimit option:

To disable array parsing entirely, set parseArrays to false.

If you mix notations, qs will merge the two items into an object:

You can also create arrays of objects:

Stringifying

When stringifying, qs by default URI encodes output. Objects are stringified as you would expect:
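A concept sketch (not qs itself; it handles only flat objects with string values) of what default stringification amounts to:

```javascript
// Sketch: URI-encode each key/value pair and join with '&'.
// stringifyFlat is a hypothetical helper for illustration.
function stringifyFlat(obj) {
  return Object.entries(obj)
    .map(([k, v]) => encodeURIComponent(k) + '=' + encodeURIComponent(v))
    .join('&');
}

console.log(stringifyFlat({ a: 'b', c: 'd' })); // → a=b&c=d
```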

This encoding can be disabled by setting the encode option to false:

Encoding can be disabled for keys by setting the encodeValuesOnly option to true:

This encoding can also be replaced by a custom encoding method set as encoder option:

(Note: the encoder option does not apply if encode is false)

Analogue to the encoder there is a decoder option for parse to override decoding of properties and values:

Examples beyond this point will be shown as though the output is not URI encoded for clarity. Please note that the return values in these cases will be URI encoded during real usage.

When arrays are stringified, by default they are given explicit indices:

You may override this by setting the indices option to false:

You may use the arrayFormat option to specify the format of the output array:

When objects are stringified, by default they use bracket notation:

You may override this to use dot notation by setting the allowDots option to true:

Empty strings and null values will omit the value, but the equals sign (=) remains in place:

A key with no values (such as an empty object or array) will return nothing:

Properties that are set to undefined will be omitted entirely:

The query string may optionally be prepended with a question mark:

The delimiter may be overridden with stringify as well:

If you only want to override the serialization of Date objects, you can provide a serializeDate option:

You may use the sort option to affect the order of parameter keys:

Finally, you can use the filter option to restrict which keys will be included in the stringified output. If you pass a function, it will be called for each key to obtain the replacement value. Otherwise, if you pass an array, it will be used to select properties and array indices for stringification:

Handling of null values

By default, null values are treated like empty strings:

Parsing does not distinguish between parameters with and without equal signs; both are converted to empty strings.

To distinguish between null values and empty strings use the strictNullHandling flag. In the result string the null values have no = sign:

To parse values without = back to null use the strictNullHandling flag:

To completely skip rendering keys with null values, use the skipNulls flag:

Dealing with special character sets

By default the encoding and decoding of characters is done in utf-8. If you wish to encode query strings in a different character set (e.g. Shift JIS) you can use the qs-iconv library:

This also works for decoding of query strings:

RFC 3986 and RFC 1738 space encoding

RFC 3986 is used as the default format and encodes ' ' as %20, which is backward compatible. At the same time, output can be stringified as per RFC 1738, with ' ' encoded as '+'.

assert.equal(qs.stringify({ a: 'b c' }), 'a=b%20c');
assert.equal(qs.stringify({ a: 'b c' }, { format : 'RFC3986' }), 'a=b%20c');
assert.equal(qs.stringify({ a: 'b c' }, { format : 'RFC1738' }), 'a=b+c');


Glob

Match files using the patterns the shell uses, like stars and stuff.

Build Status Build Status Coverage Status

This is a glob implementation in JavaScript. It uses the minimatch library to do its matching.

Usage

Install with npm

npm i glob

Glob Primer

“Globs” are the patterns you type when you do stuff like ls *.js on the command line, or put build/* in a .gitignore file.

Before parsing the path part patterns, braced sections are expanded into a set. Braced sections start with { and end with }, with any number of comma-delimited sections within. Braced sections may contain slash characters, so a{/b/c,bcd} would expand into a/b/c and abcd.
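A sketch of this expansion for the simple, non-nested case (illustrative only; the real implementation lives in the minimatch library and handles nesting and ranges):

```javascript
// Sketch: expand one non-nested braced section into a set of patterns.
// expandBraces is a hypothetical helper for illustration.
function expandBraces(pattern) {
  const m = pattern.match(/^(.*)\{([^{}]*)\}(.*)$/);
  if (!m) return [pattern]; // no braces: the pattern is its own set
  return m[2].split(',').map(part => m[1] + part + m[3]);
}

console.log(expandBraces('a{/b/c,bcd}')); // → [ 'a/b/c', 'abcd' ]
```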

The following characters have special magic meaning when used in a path portion:

  • * Matches 0 or more characters in a single path portion
  • ? Matches 1 character
  • [...] Matches a range of characters, similar to a RegExp range. If the first character of the range is ! or ^ then it matches any character not in the range.
  • !(pattern|pattern|pattern) Matches anything that does not match any of the patterns provided.
  • ?(pattern|pattern|pattern) Matches zero or one occurrence of the patterns provided.
  • +(pattern|pattern|pattern) Matches one or more occurrences of the patterns provided.
  • *(a|b|c) Matches zero or more occurrences of the patterns provided
  • @(pattern|pat*|pat?erN) Matches exactly one of the patterns provided
  • ** If a “globstar” is alone in a path portion, then it matches zero or more directories and subdirectories searching for matches. It does not crawl symlinked directories.

Dots

If a file or directory path portion has a . as the first character, then it will not match any glob pattern unless that pattern’s corresponding path part also has a . as its first character.

For example, the pattern a/.*/c would match the file at a/.b/c. However the pattern a/*/c would not, because * does not start with a dot character.

You can make glob treat dots as normal characters by setting dot:true in the options.

Basename Matching

If you set matchBase:true in the options, and the pattern has no slashes in it, then it will seek for any file anywhere in the tree with a matching basename. For example, *.js would match test/simple/basic.js.

Empty Sets

If no matching files are found, then an empty array is returned. This differs from the shell, where the pattern itself is returned. For example:

$ echo a*s*d*f
a*s*d*f

To get the bash-style behavior, set nonull:true in the options.

See Also:

glob.hasMagic(pattern, options)

Returns true if there are any special characters in the pattern, and false otherwise.

Note that the options affect the results. If noext:true is set in the options object, then +(a|b) will not be considered a magic pattern. If the pattern has a brace expansion, like a/{b/c,x/y} then that is considered magical, unless nobrace:true is set in the options.

glob(pattern, options, cb)

  • pattern {String} Pattern to be matched
  • options {Object}
  • cb {Function}
    • err {Error | null}
    • matches {Array<String>} filenames found matching the pattern

Perform an asynchronous glob search.

glob.sync(pattern, options)

  • pattern {String} Pattern to be matched
  • options {Object}
  • return: {Array<String>} filenames found matching the pattern

Perform a synchronous glob search.

Class: glob.Glob

Create a Glob object by instantiating the glob.Glob class.

It’s an EventEmitter, and starts walking the filesystem to find matches immediately.

new glob.Glob(pattern, options, cb)

  • pattern {String} pattern to search for
  • options {Object}
  • cb {Function} Called when an error occurs, or matches are found
    • err {Error | null}
    • matches {Array<String>} filenames found matching the pattern

Note that if the sync flag is set in the options, then matches will be immediately available on the g.found member.

Properties

  • minimatch The minimatch object that the glob uses.
  • options The options object passed in.
  • aborted Boolean which is set to true when calling abort(). There is no way at this time to continue a glob search after aborting, but you can re-use the statCache to avoid having to duplicate syscalls.
  • cache Convenience object. Each field has the following possible values:
    • false - Path does not exist
    • true - Path exists
    • 'FILE' - Path exists, and is not a directory
    • 'DIR' - Path exists, and is a directory
    • [file, entries, ...] - Path exists, is a directory, and the array value is the results of fs.readdir
  • statCache Cache of fs.stat results, to prevent statting the same path multiple times.
  • symlinks A record of which paths are symbolic links, which is relevant in resolving ** patterns.
  • realpathCache An optional object which is passed to fs.realpath to minimize unnecessary syscalls. It is stored on the instantiated Glob object, and may be re-used.

Events

  • end When the matching is finished, this is emitted with all the matches found. If the nonull option is set, and no match was found, then the matches list contains the original pattern. The matches are sorted, unless the nosort flag is set.
  • match Every time a match is found, this is emitted with the specific thing that matched. It is not deduplicated or resolved to a realpath.
  • error Emitted when an unexpected error is encountered, or whenever any fs error occurs if options.strict is set.
  • abort When abort() is called, this event is raised.

Methods

  • pause Temporarily stop the search
  • resume Resume the search
  • abort Stop the search forever

Options

All the options that can be passed to Minimatch can also be passed to Glob to change pattern matching behavior. Also, some have been added, or have glob-specific ramifications.

All options are false by default, unless otherwise noted.

All options are added to the Glob object, as well.

If you are running many glob operations, you can pass a Glob object as the options argument to a subsequent operation to shortcut some stat and readdir calls. At the very least, you may pass in shared symlinks, statCache, realpathCache, and cache options, so that parallel glob operations will be sped up by sharing information about the filesystem.

  • cwd The current working directory in which to search. Defaults to process.cwd().
  • root The place where patterns starting with / will be mounted onto. Defaults to path.resolve(options.cwd, "/") (/ on Unix systems, and C:\ or some such on Windows.)
  • dot Include .dot files in normal matches and globstar matches. Note that an explicit dot in a portion of the pattern will always match dot files.
  • nomount By default, a pattern starting with a forward-slash will be “mounted” onto the root setting, so that a valid filesystem path is returned. Set this flag to disable that behavior.
  • mark Add a / character to directory matches. Note that this requires additional stat calls.
  • nosort Don’t sort the results.
  • stat Set to true to stat all results. This reduces performance somewhat, and is completely unnecessary, unless readdir is presumed to be an untrustworthy indicator of file existence.
  • silent When an unusual error is encountered when attempting to read a directory, a warning will be printed to stderr. Set the silent option to true to suppress these warnings.
  • strict When an unusual error is encountered when attempting to read a directory, the process will just continue on in search of other matches. Set the strict option to raise an error in these cases.
  • cache See cache property above. Pass in a previously generated cache object to save some fs calls.
  • statCache A cache of results of filesystem information, to prevent unnecessary stat calls. While it should not normally be necessary to set this, you may pass the statCache from one glob() call to the options object of another, if you know that the filesystem will not change between calls. (See “Race Conditions” below.)
  • symlinks A cache of known symbolic links. You may pass in a previously generated symlinks object to save lstat calls when resolving ** matches.
  • sync DEPRECATED: use glob.sync(pattern, opts) instead.
  • nounique In some cases, brace-expanded patterns can result in the same file showing up multiple times in the result set. By default, this implementation prevents duplicates in the result set. Set this flag to disable that behavior.
  • nonull Set to never return an empty set, instead returning a set containing the pattern itself. This is the default in glob(3).
  • debug Set to enable debug logging in minimatch and glob.
  • nobrace Do not expand {a,b} and {1..3} brace sets.
  • noglobstar Do not match ** against multiple filenames. (Ie, treat it as a normal * instead.)
  • noext Do not match +(a|b) “extglob” patterns.
  • nocase Perform a case-insensitive match. Note: on case-insensitive filesystems, non-magic patterns will match by default, since stat and readdir will not raise errors.
  • matchBase Perform a basename-only match if the pattern does not contain any slash characters. That is, *.js would be treated as equivalent to **/*.js, matching all js files in all directories.
  • nodir Do not match directories, only files. (Note: to match only directories, simply put a / at the end of the pattern.)
  • ignore Add a pattern or an array of glob patterns to exclude matches. Note: ignore patterns are always in dot:true mode, regardless of any other settings.
  • follow Follow symlinked directories when expanding ** patterns. Note that this can result in a lot of duplicate references in the presence of cyclic links.
  • realpath Set to true to call fs.realpath on all of the results. In the case of a symlink that cannot be resolved, the full absolute path to the matched entry is returned (though it will usually be a broken symlink)
  • absolute Set to true to always receive absolute paths for matched files. Unlike realpath, this also affects the values returned in the match event.

Comparisons to other fnmatch/glob implementations

While strict compliance with the existing standards is a worthwhile goal, some discrepancies exist between node-glob and other implementations, and are intentional.

The double-star character ** is supported by default, unless the noglobstar flag is set. This is supported in the manner of bsdglob and bash 4.3, where ** only has special significance if it is the only thing in a path part. That is, a/**/b will match a/x/y/b, but a/**b will not.

If an escaped pattern has no matches, and the nonull flag is set, then glob returns the pattern as-provided, rather than interpreting the character escapes. For example, glob.match([], "\\*a\\?") will return "\\*a\\?" rather than "*a?". This is akin to setting the nullglob option in bash, except that it does not resolve escaped pattern characters.

If brace expansion is not disabled, then it is performed before any other interpretation of the glob pattern. Thus, a pattern like +(a|{b),c)}, which would not be valid in bash or zsh, is expanded first into the set of +(a|b) and +(a|c), and those patterns are checked for validity. Since those two are valid, matching proceeds.

Comments and Negation

Previously, this module let you mark a pattern as a “comment” if it started with a # character, or a “negated” pattern if it started with a ! character.

These options were deprecated in version 5, and removed in version 6.

To specify things that should not match, use the ignore option.

Windows

Please only use forward-slashes in glob expressions.

Though windows uses either / or \ as its path separator, only / characters are used by this glob implementation. You must use forward-slashes only in glob expressions. Back-slashes will always be interpreted as escape characters, not path separators.

Results from absolute patterns such as /foo/* are mounted onto the root setting using path.join. On windows, this will by default result in /foo/* matching C:\foo\bar.txt.

Race Conditions

Glob searching, by its very nature, is susceptible to race conditions, since it relies on directory walking and such.

As a result, it is possible that a file that exists when glob looks for it may have been deleted or modified by the time it returns the result.

As part of its internal implementation, this program caches all stat and readdir calls that it makes, in order to cut down on system overhead. However, this also makes it even more susceptible to races, especially if the cache or statCache objects are reused between glob calls.

Users are thus advised not to use a glob result as a guarantee of filesystem state in the face of rapid changes. For the vast majority of operations, this is never a problem.

Glob’s logo was created by Tanya Brassie. Logo files can be found here.

Contributing

Any change to behavior (including bugfixes) must come with a test.

Patches that fail tests or reduce performance will be rejected.

# to run tests
npm test

# to re-generate test fixtures
npm run test-regen

# to benchmark against bash/zsh
npm run bench

# to profile javascript
npm run prof



Optionator

Optionator is a JavaScript/Node.js option parsing and help generation library used by eslint, Grasp, LiveScript, esmangle, escodegen, and many more.

For an online demo, check out the Grasp online demo.

About · Usage · Settings Format · Argument Format

Why?

The problem with other option parsers, such as yargs or minimist, is they just accept all input, valid or not. With Optionator, if you mistype an option, it will give you an error (with a suggestion for what you meant). If you give the wrong type of argument for an option, it will give you an error rather than supplying the wrong input to your application.

$ cmd --halp
Invalid option '--halp' - perhaps you meant '--help'?

$ cmd --count str
Invalid value for option 'count' - expected type Int, received value: str.

Other helpful features include reformatting the help text based on the size of the console, so that it fits even if the console is narrow, and accepting not just an array (eg. process.argv), but a string or object as well, making things like testing much easier.

About

Optionator uses type-check and levn behind the scenes to cast and verify input according to the specified types.

npm install optionator

For updates on Optionator, follow me on twitter.

Optionator is a Node.js module, but can be used in the browser as well if packed with webpack/browserify.

Usage

require('optionator'); returns a function. It has one property, VERSION, the current version of the library as a string. This function is called with an object specifying your options and other information, see the settings format section. This in turn returns an object with four properties, parse, parseArgv, generateHelp, and generateHelpForOption, which are all functions.

parse(input, parseOptions)

parse processes the input according to your settings, and returns an object with the results.

arguments
  • input - [String] | Object | String - the input you wish to parse
  • parseOptions - {slice: Int} - all options optional
    • slice specifies how much to slice away from the beginning if the input is an array or string - by default 0 for string, 2 for array (works with process.argv)
returns

Object - the parsed options, each key is a camelCase version of the option name (specified in dash-case), and each value is the processed value for that option. Positional values are in an array under the _ key.

example

parseArgv(input)

parseArgv works exactly like parse, but only for array input and it slices off the first two elements.

arguments
  • input - [String] - the input you wish to parse
returns

See “returns” section in “parse”

example

generateHelp(helpOptions)

generateHelp produces help text based on your settings.

arguments
  • helpOptions - {showHidden: Boolean, interpolate: Object} - all options optional
    • showHidden specifies whether to show options with hidden: true specified, by default it is false
    • interpolate specify data to be interpolated in prepend and append text, {{key}} is the format - eg. generateHelp({interpolate:{version: '0.4.2'}}), will change this append text: Version {{version}} to Version 0.4.2
returns

String - the generated help text

example

generateHelpForOption(optionName)

generateHelpForOption produces expanded help text for the option specified by optionName. If an example was specified for the option, it will be displayed, and if a longDescription was specified, it will display that instead of the description.

arguments
  • optionName - String - the name of the option to display
returns

String - the generated help text for the option

example

Settings Format

When you require('optionator'), you get a function that takes in a settings object. This object has the type:

{
  prepend: String,
  append: String,
  options: [{heading: String} | {
    option: String,
    alias: [String] | String,
    type: String,
    enum: [String],
    default: String,
    restPositional: Boolean,
    required: Boolean,
    overrideRequired: Boolean,
    dependsOn: [String] | String,
    concatRepeatedArrays: Boolean | (Boolean, Object),
    mergeRepeatedObjects: Boolean,
    description: String,
    longDescription: String,
    example: [String] | String
  }],
  helpStyle: {
    aliasSeparator: String,
    typeSeparator: String,
    descriptionSeparator: String,
    initialIndent: Int,
    secondaryIndent: Int,
    maxPadFactor: Number
  },
  mutuallyExclusive: [[String | [String]]],
  concatRepeatedArrays: Boolean | (Boolean, Object), // deprecated, set in defaults object
  mergeRepeatedObjects: Boolean, // deprecated, set in defaults object
  positionalAnywhere: Boolean,
  typeAliases: Object,
  defaults: Object
}

All of the properties are optional (the Maybe has been excluded for brevity's sake), except for having either heading: String or option: String in each object in the options array.

Top Level Properties

  • prepend is an optional string to be placed before the options in the help text
  • append is an optional string to be placed after the options in the help text
  • options is a required array specifying your options and headings, the options and headings will be displayed in the order specified
  • helpStyle is an optional object which enables you to change the default appearance of some aspects of the help text
  • mutuallyExclusive is an optional array of arrays of either strings or arrays of strings. The top level array is a list of rules, each rule is a list of elements - each element can be either a string (the name of an option), or a list of strings (a group of option names) - there will be an error if more than one element is present
  • concatRepeatedArrays see description under the “Option Properties” heading - use at the top level is deprecated, if you want to set this for all options, use the defaults property
  • mergeRepeatedObjects see description under the “Option Properties” heading - use at the top level is deprecated, if you want to set this for all options, use the defaults property
  • positionalAnywhere is an optional boolean (defaults to true) - when true it allows positional arguments anywhere, when false, all arguments after the first positional one are taken to be positional as well, even if they look like a flag. For example, with positionalAnywhere: false, the arguments --flag --boom 12 --crack would have two positional arguments: 12 and --crack
  • typeAliases is an optional object, it allows you to set aliases for types, eg. {Path: 'String'} would allow you to use the type Path as an alias for the type String
  • defaults is an optional object following the option properties format, which specifies default values for all options. A default will be overridden if manually set. For example, you can do default: { type: "String" } to set the default type of all options to String, and then override that default in an individual option by setting the type property

Heading Properties

  • heading a required string, the name of the heading

Option Properties

  • option the required name of the option - use dash-case, without the leading dashes
  • alias is an optional string or array of strings which specify any aliases for the option
  • type is a required string in the type check format, this will be used to cast the inputted value and validate it
  • enum is an optional array of strings, each string will be parsed by levn - the argument value must be one of the resulting values - each potential value must validate against the specified type
  • default is an optional string, which will be parsed by levn and used as the default value if none is set - the value must validate against the specified type
  • restPositional is an optional boolean - if set to true, everything after the option will be taken to be a positional argument, even if it looks like a named argument
  • required is an optional boolean - if set to true, the option parsing will fail if the option is not defined
  • overrideRequired is an optional boolean - if set to true and the option is used, and there is another option which is required but not set, it will override the need for the required option and there will be no error - this is useful if you have required options and want to use --help or --version flags
  • concatRepeatedArrays is an optional boolean or tuple with boolean and options object (defaults to false) - when set to true and an option contains an array value and is repeated, the subsequent values for the flag will be appended rather than overwriting the original value - eg. option g of type [String]: -g a -g b -g c,d will result in ['a','b','c','d']

You can supply an options object by giving the following value: [true, options]. The one currently supported option is oneValuePerFlag, which only allows one array value per flag. This is useful if your potential values contain a comma.
  • mergeRepeatedObjects is an optional boolean (defaults to false) - when set to true and an option contains an object value and is repeated, the subsequent values for the flag will be merged rather than overwriting the original value - eg. option g of type Object: -g a:1 -g b:2 -g c:3,d:4 will result in {a: 1, b: 2, c: 3, d: 4}
  • dependsOn is an optional string or array of strings - if simply a string (the name of another option), it will make sure that that other option is set; if an array of strings, depending on whether 'and' or 'or' is first, it will either check whether all (['and', 'option-a', 'option-b']) or at least one (['or', 'option-a', 'option-b']) of the other options are set
  • description is an optional string, which will be displayed next to the option in the help text
  • longDescription is an optional string, it will be displayed instead of the description when generateHelpForOption is used
  • example is an optional string or array of strings with example(s) for the option - these will be displayed when generateHelpForOption is used

Help Style Properties

  • aliasSeparator is an optional string, separates multiple names from each other - default: ', '
  • typeSeparator is an optional string, separates the type from the names - default: ' '
  • descriptionSeparator is an optional string, separates the description from the padded name and type - default: ' '
  • initialIndent is an optional int - the amount of indent for options - default: 2
  • secondaryIndent is an optional int - the amount of indent if wrapped fully (in addition to the initial indent) - default: 4
  • maxPadFactor is an optional number - affects the default level of padding for the names/type, it is multiplied by the average of the length of the names/type - default: 1.5

Argument Format

At the highest level there are two types of arguments: named, and positional.

Named arguments of any length are prefixed with -- (eg. --go), and those of one character may be prefixed with either -- or - (eg. -g).

There are two types of named arguments: boolean flags (eg. --problemo, -p) which take no value and result in a true if they are present, the falsey undefined if they are not present, or false if present and explicitly prefixed with no (eg. --no-problemo). Named arguments with values (eg. --tseries 800, -t 800) are the other type. If the option has a type Boolean it will automatically be made into a boolean flag. Any other type results in a named argument that takes a value.

For more information about how to properly set types to get the value you want, take a look at the type check and levn pages.

You can group single character arguments that use a single -, however all except the last must be boolean flags (which take no value). The last may be a boolean flag, or an argument which takes a value - eg. -ba 2 is equivalent to -b -a 2.

Positional arguments are all those values which do not fall under the above - they can be anywhere, not just at the end. For example, in cmd -b one -a 2 two where b is a boolean flag, and a has the type Number, there are two positional arguments, one and two.

Everything after an -- is positional, even if it looks like a named argument.

You may optionally use = to separate option names from values, for example: --count=2.

If you specify the option NUM, then any argument using a single - followed by a number will be valid and will set the value of NUM. Eg. -2 will be parsed into NUM: 2.

If duplicate named arguments are present, the last one will be taken.

Technical About

optionator is written in LiveScript - a language that compiles to JavaScript. It uses levn to cast arguments to their specified type, and uses type-check to validate values. It also uses the prelude.ls library.

Fastest full PostgreSQL nodejs client


Getting started


Good UX with Postgres.js

Install

Use

Connection options postgres([url], [options])

You can use either a postgres:// url connection string or the options to define your database connection properties. Options in the object will override any present in the url.
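A hypothetical connection configuration fragment (all values are placeholders; since connections are lazy, constructing this performs no I/O):

```javascript
const postgres = require('postgres')

// Options in the object override any present in the url.
const sql = postgres('postgres://username:password@host:5432/database', {
  host: 'localhost',     // overrides the host from the url
  port: 5432,
  database: 'mydb',
  username: 'user',
  password: 'secret',
  ssl: false,
  max: 10                // max connections in the pool
})
```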

More info for the ssl option can be found in the Node.js docs for tls connect options

Query sql` ` -> Promise

A query will always return a Promise which resolves to a results array [...]{ rows, command }. Destructuring is great to immediately access the first element.

Query parameters

Parameters are automatically inferred and handled by Postgres so that SQL injection isn’t possible. No special handling is necessary, simply use JS tagged template literals as usual.

Stream sql` `.stream(fn) -> Promise

If you want to handle rows returned by a query one by one, you can use .stream which returns a promise that resolves once there are no more rows.

Listen and notify

When you call listen, a dedicated connection will automatically be made to ensure that you receive notifications in real time. This connection will be used for any further calls to listen. Listen returns a promise which resolves once the LISTEN query to Postgres completes, or if there is already a listener active.

Notify can be done as usual in sql, or by using the sql.notify method.

Tagged template function sql``

Tagged template functions are not just ordinary template literal strings. They allow the function to handle any parameters within before interpolation. This means that they can be used to enforce a safe way of writing queries, which is what Postgres.js does. Any generic value will be serialized according to an inferred type, and replaced by PostgreSQL protocol placeholders $1, $2, ... and then sent to the database as a parameter to let it handle any need for escaping / casting.

This also means you cannot write dynamic queries or concatenate queries together by simple string manipulation. To enable dynamic queries in a safe way, the sql function doubles as a regular function which escapes any value properly. It also includes overloads for common cases of inserting, selecting, updating and querying.
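The placeholder mechanism can be illustrated with a toy tag function (this is a simplified sketch, not Postgres.js's actual internals):

```javascript
// A toy sql tag: the literal parts arrive in `strings`, the
// interpolated values in `values`, so the tag can emit $1, $2, ...
// placeholders and pass the values separately as parameters.
function sql (strings, ...values) {
  const text = strings.reduce((acc, part, i) => acc + '$' + i + part)
  return { text, parameters: values }
}

const name = 'Murray'
const age = 68
const query = sql`select * from users where name = ${name} and age = ${age}`

query.text        // 'select * from users where name = $1 and age = $2'
query.parameters  // ['Murray', 68]
```

Because the values never touch the query text, the database receives them as typed parameters, which is why SQL injection via interpolation is not possible.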

Dynamic query helpers sql() inside tagged template

Postgres.js has a safe, ergonomic way to aid you in writing queries. This makes it easier to write dynamic inserts, selects, updates and where queries.

Insert

You can leave out the column names and simply do sql(user) if you want to get all fields from the object as columns, but be careful not to allow users to supply columns you don’t want.

Multiple inserts in one query

If you need to insert multiple rows at the same time it’s also much faster to do it with a single insert. Simply pass an array of objects to sql().

Update

This is also useful for update queries

Select

Arrays sql.array(Array)

PostgreSQL has a native array type which is similar to js arrays, but only allows the same type and shape for nested items. This method automatically infers the item type and serializes js arrays into PostgreSQL arrays.

JSON sql.json(object)

File query sql.file(path, [args], [options]) -> Promise

Using an .sql file for a query. The contents will be cached in memory so that the file is only read once.

Transactions

SAVEPOINT sql.savepoint([name], fn) -> Promise

Do note that you can often achieve the same result using WITH queries (Common Table Expressions) instead of using transactions.

Types

You can add ergonomic support for custom types, or simply pass an object with a { type, value } signature that contains the Postgres oid for the type and the correctly serialized value.

Adding Query helpers is the recommended approach which can be done like this:

Teardown / Cleanup

To ensure proper teardown and cleanup on server restarts use sql.end({ timeout: null }) before process.exit()

Calling sql.end() will reject new queries and return a Promise which resolves when all queries are finished and the underlying connections are closed. If a timeout is provided any pending queries will be rejected once the timeout is reached and the connections will be destroyed.

Sample shutdown using Prexit

The Connection Pool

Connections are created lazily once a query is created. This means that simply doing const sql = postgres(...) won’t have any effect other than instantiating a new sql instance.

No connection will be made until a query is made.

This means that we get a much simpler story for error handling and reconnections. Queries will be sent over the wire immediately on the next available connection in the pool. Connections are automatically taken out of the pool if you start a transaction using sql.begin(), and automatically returned to the pool once your transaction is done.

Any query which was already sent over the wire will be rejected if the connection is lost. It’ll automatically defer to the error handling you have for that query, and since connections are lazy it’ll automatically try to reconnect the next time a query is made. The benefit of this is no weird generic “onerror” handler that tries to get things back to normal, and also simpler application code since you don’t have to handle errors out of context.

There are no guarantees about queries executing in order unless using a transaction with sql.begin() or setting max: 1. Of course doing a series of queries, one awaiting the other will work as expected, but that’s just due to the nature of js async/promise handling, so it’s not necessary for this library to be concerned with ordering.

sql.unsafe - Advanced unsafe use cases

Unsafe queries sql.unsafe(query, [args], [options]) -> promise

If you know what you’re doing, you can use unsafe to pass any string you’d like to postgres. Please note that this can lead to sql injection if you’re not careful.

Errors

Errors are all thrown to related queries and never globally. Errors coming from PostgreSQL itself are always in the native Postgres format, and the same goes for any Node.js errors eg. coming from the underlying connection.

There are also the following errors specifically for this library.

MESSAGE_NOT_SUPPORTED

X (X) is not supported

Whenever a message is received from Postgres which is not supported by this library. Feel free to file an issue if you think something is missing.

MAX_PARAMETERS_EXCEEDED

Max number of parameters (65534) exceeded

The postgres protocol doesn’t allow more than 65534 (16bit) parameters. If you run into this issue there are various workarounds such as using sql([...]) to escape values instead of passing them as parameters.

SASL_SIGNATURE_MISMATCH

Message type X not supported

When using SASL authentication the server responds with a signature at the end of the authentication flow which needs to match the one on the client. This is to avoid man in the middle attacks. If you receive this error the connection was cancelled because the server did not reply with the expected signature.

NOT_TAGGED_CALL

Query not called as a tagged template literal

Making queries has to be done using the sql function as a tagged template. This is to ensure parameters are serialized and passed to Postgres as query parameters with correct types and to avoid SQL injection.

AUTH_TYPE_NOT_IMPLEMENTED

Auth type X not implemented

Postgres supports many different authentication types. This one is not supported.

CONNECTION_CLOSED

write CONNECTION_CLOSED host:port

This error is thrown if the connection was closed without an error. This should not happen during normal operation, so please create an issue if this was unexpected.

CONNECTION_ENDED

write CONNECTION_ENDED host:port

This error is thrown if the user has called sql.end() and performed a query afterwards.

CONNECTION_DESTROYED

write CONNECTION_DESTROYED host:port

This error is thrown for any queries that were pending when the timeout to sql.end({ timeout: X }) was reached.

Thank you

A really big thank you to [@JAForbes](https://twitter.com/jmsfbs) who introduced me to Postgres and still holds my hand navigating all the great opportunities we have.

Thanks to [@ACXgit](https://twitter.com/andreacoiutti) for initial tests and dogfooding.

Also thanks to Ryan Dahl for letting me have the postgres npm package name.



Enhanced fs.readdir()

:warning: This is a fork of the original readdir-enhanced package, with some additional fixes.

Build Status Windows Build Status

Coverage Status Codacy Score Inline docs Dependencies

readdir-enhanced is a backward-compatible drop-in replacement for fs.readdir() and fs.readdirSync() with tons of extra features (filtering, recursion, absolute paths, stats, and more) as well as additional APIs for Promises, Streams, and EventEmitters.

Pick Your API

readdir-enhanced has multiple APIs, so you can pick whichever one you prefer. There are three main APIs:

  • Synchronous API
    aliases: readdir.sync, readdir.readdirSync
    Blocks the thread until all directory contents are read, and then returns all the results.

  • Streaming API
    aliases: readdir.stream, readdir.readdirStream
    The streaming API reads the starting directory asynchronously and returns the results in real-time as they are read. The results can be piped to other Node.js streams, or you can listen for specific events via the EventEmitter interface. (see example below)

Enhanced Features

readdir-enhanced adds several features to the built-in fs.readdir() function. All of the enhanced features are opt-in, which makes readdir-enhanced fully backward compatible by default. You can enable any of the features by passing in an options argument as the second parameter.

Recursion

By default, readdir-enhanced will only return the top-level contents of the starting directory. But you can set the deep option to recursively traverse the subdirectories and return their contents as well.

Crawl ALL subdirectories

The deep option can be set to true to traverse the entire directory structure.

Crawl to a specific depth

The deep option can be set to a number to only traverse that many levels deep. For example, calling readdir('my/directory', {deep: 2}) will return subdir1/file.txt and subdir1/subdir2/file.txt, but it won’t return subdir1/subdir2/subdir3/file.txt.
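The semantics of the deep option can be sketched with a hypothetical helper (this is not readdir-enhanced's actual internals, just an illustration of how the option values map to a recursion limit):

```javascript
// Hypothetical: normalize the `deep` option to a maximum depth.
// true means unlimited recursion, a number caps the depth, and
// omitting the option keeps only the top-level contents.
function maxDepth (deep) {
  if (deep === true) return Infinity
  if (typeof deep === 'number') return deep
  return 0
}

maxDepth(true)       // Infinity - crawl all subdirectories
maxDepth(2)          // 2 - crawl at most two levels deep
maxDepth(undefined)  // 0 - top-level contents only
```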

Crawl subdirectories by name

For simple use-cases, you can use a regular expression or a glob pattern to crawl only the directories whose path matches the pattern. The path is relative to the starting directory by default, but you can customize this via options.basePath.

NOTE: Glob patterns always use forward-slashes, even on Windows. This does not apply to regular expressions though. Regular expressions should use the appropriate path separator for the environment. Or, you can match both types of separators using [\\/].

Custom recursion logic

For more advanced recursion, you can set the deep option to a function that accepts an fs.Stats object and returns a truthy value if the starting directory should be crawled.

NOTE: The fs.Stats object that’s passed to the function has additional path and depth properties. The path is relative to the starting directory by default, but you can customize this via options.basePath. The depth is the number of subdirectories beneath the base path (see options.deep).

Filtering

The filter option lets you limit the results based on any criteria you want.

Filter by name

For simple use-cases, you can use a regular expression or a glob pattern to filter items by their path. The path is relative to the starting directory by default, but you can customize this via options.basePath.

NOTE: Glob patterns always use forward-slashes, even on Windows. This does not apply to regular expressions though. Regular expressions should use the appropriate path separator for the environment. Or, you can match both types of separators using [\\/].

Custom filtering logic

For more advanced filtering, you can specify a filter function that accepts an fs.Stats object and returns a truthy value if the item should be included in the results.

NOTE: The fs.Stats object that’s passed to the filter function has additional path and depth properties. The path is relative to the starting directory by default, but you can customize this via options.basePath. The depth is the number of subdirectories beneath the base path (see options.deep).
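A sketch of such a filter predicate; the stats argument here is a stand-in object with the extra path and depth properties described above:

```javascript
// Hypothetical: include regular files, but skip dot-entries.
const filter = stats => stats.isFile() && !stats.path.startsWith('.')

filter({ isFile: () => true, path: 'notes.txt', depth: 0 })   // true
filter({ isFile: () => true, path: '.git/config', depth: 1 }) // false
```

The same predicate would be passed as `{ filter }` in the options argument.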

Base Path

By default all readdir-enhanced functions return paths that are relative to the starting directory. But you can use the basePath option to customize this. The basePath will be prepended to all of the returned paths. One common use-case for this is to set basePath to the absolute path of the starting directory, so that all of the returned paths will be absolute.

Path Separator

By default, readdir-enhanced uses the correct path separator for your OS (\ on Windows, / on Linux & MacOS). But you can set the sep option to any separator character(s) that you want to use instead. This is usually used to ensure consistent path separators across different OSes.

Custom FS methods

By default, readdir-enhanced uses the default Node.js FileSystem module for methods like fs.stat, fs.readdir and fs.lstat. But in some situations, you may want to use your own FS methods (FTP, SSH, remote drives, etc.). So you can provide your own implementation of FS methods by setting options.fs or specific methods, such as options.fs.stat.

Get fs.Stats objects instead of strings

All of the readdir-enhanced functions listed above return an array of strings (paths). But in some situations, the path isn't enough information. So, readdir-enhanced provides alternative versions of each function, which return an array of fs.Stats objects instead of strings. The fs.Stats object contains all sorts of useful information, such as the size, the creation date/time, and helper methods such as isFile(), isDirectory(), isSymbolicLink(), etc.

NOTE: The fs.Stats objects that are returned also have additional path and depth properties. The path is relative to the starting directory by default, but you can customize this via options.basePath. The depth is the number of subdirectories beneath the base path (see options.deep).

To get fs.Stats objects instead of strings, just add the word “Stat” to the function name. As with the normal functions, each one is aliased (e.g. readdir.async.stat is the same as readdir.readdirAsyncStat), so you can use whichever naming style you prefer.

Backward Compatible

readdir-enhanced is fully backward-compatible with Node.js' built-in fs.readdir() and fs.readdirSync() functions, so you can use it as a drop-in replacement in existing projects without affecting existing functionality, while still being able to use the enhanced features as needed.

Contributing

I welcome any contributions, enhancements, and bug-fixes. File an issue on GitHub and submit a pull request.

Building

To build the project locally on your computer:

  1. Clone this repo
    git clone https://github.com/bigstickcarpet/readdir-enhanced.git

  2. Install dependencies
    npm install

  3. Run the tests
    npm test



semver(1) – The semantic versioner for npm

Install

Usage

As a node module:

As a command-line utility:

$ semver -h

A JavaScript implementation of the https://semver.org/ specification

Usage: semver [options] <version> [<version> [...]]
Prints valid versions sorted by SemVer precedence

Options:
-r --range <range>
        Print versions that match the specified range.

-i --increment [<level>]
        Increment a version by the specified level.  Level can
        be one of: major, minor, patch, premajor, preminor,
        prepatch, or prerelease.  Default level is 'patch'.
        Only one version may be specified.

--preid <identifier>
        Identifier to be used to prefix premajor, preminor,
        prepatch or prerelease version increments.

-l --loose
        Interpret versions and ranges loosely

-p --include-prerelease
        Always include prerelease versions in range matching

-c --coerce
        Coerce a string into SemVer if possible
        (does not imply --loose)

Program exits successfully if any valid version satisfies
all supplied ranges, and prints all satisfying versions.

If no satisfying versions are found, then exits failure.

Versions are printed in ascending order, so supplying
multiple versions to the utility will just sort them.

Versions

A “version” is described by the v2.0.0 specification found at https://semver.org/.

A leading "=" or "v" character is stripped off and ignored.

Ranges

A version range is a set of comparators which specify versions that satisfy the range.

A comparator is composed of an operator and a version. The set of primitive operators is:

  • < Less than
  • <= Less than or equal to
  • > Greater than
  • >= Greater than or equal to
  • = Equal. If no operator is specified, then equality is assumed, so this operator is optional, but MAY be included.

For example, the comparator >=1.2.7 would match the versions 1.2.7, 1.2.8, 2.5.3, and 1.3.9, but not the versions 1.2.6 or 1.1.0.

Comparators can be joined by whitespace to form a comparator set, which is satisfied by the intersection of all of the comparators it includes.

A range is composed of one or more comparator sets, joined by ||. A version matches a range if and only if every comparator in at least one of the ||-separated comparator sets is satisfied by the version.

For example, the range >=1.2.7 <1.3.0 would match the versions 1.2.7, 1.2.8, and 1.2.99, but not the versions 1.2.6, 1.3.0, or 1.1.0.

The range 1.2.7 || >=1.2.9 <2.0.0 would match the versions 1.2.7, 1.2.9, and 1.4.6, but not the versions 1.2.8 or 2.0.0.

Prerelease Tags

If a version has a prerelease tag (for example, 1.2.3-alpha.3) then it will only be allowed to satisfy comparator sets if at least one comparator with the same [major, minor, patch] tuple also has a prerelease tag.

For example, the range >1.2.3-alpha.3 would be allowed to match the version 1.2.3-alpha.7, but it would not be satisfied by 3.4.5-alpha.9, even though 3.4.5-alpha.9 is technically “greater than” 1.2.3-alpha.3 according to the SemVer sort rules. The version range only accepts prerelease tags on the 1.2.3 version. The version 3.4.5 would satisfy the range, because it does not have a prerelease flag, and 3.4.5 is greater than 1.2.3-alpha.7.

The purpose for this behavior is twofold. First, prerelease versions frequently are updated very quickly, and contain many breaking changes that are (by the author’s design) not yet fit for public consumption. Therefore, by default, they are excluded from range matching semantics.

Second, a user who has opted into using a prerelease version has clearly indicated the intent to use that specific set of alpha/beta/rc versions. By including a prerelease tag in the range, the user is indicating that they are aware of the risk. However, it is still not appropriate to assume that they have opted into taking a similar risk on the next set of prerelease versions.

Note that this behavior can be suppressed (treating all prerelease versions as if they were normal versions, for the purpose of range matching) by setting the includePrerelease flag on the options object to any functions that do range matching.

Prerelease Identifiers

The method .inc takes an additional identifier string argument that will append the value of the string as a prerelease identifier. The same identifier can be supplied on the command line via the --preid flag, and the resulting prerelease version can then be incremented further.
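This behavior can be sketched as follows. This is an illustrative reimplementation of just this one increment rule, under a hypothetical name (incPrerelease), not the semver module's actual code:

```javascript
// Sketch: append an identifier as a prerelease tag, or bump an
// existing prerelease. (Illustrative only; the real semver module
// handles loose parsing, build metadata, and many more edge cases.)
function incPrerelease (version, identifier) {
  const m = version.match(/^(\d+)\.(\d+)\.(\d+)(?:-(.+))?$/)
  if (!m) return null
  let [, major, minor, patch, pre] = m
  if (pre === undefined) {
    // Not yet a prerelease: bump patch and start a new sequence at 0.
    patch = String(Number(patch) + 1)
    pre = (identifier ? identifier + '.' : '') + '0'
  } else {
    // Already a prerelease: increment the trailing numeric component.
    const parts = pre.split('.')
    parts[parts.length - 1] = String(Number(parts[parts.length - 1]) + 1)
    pre = parts.join('.')
  }
  return major + '.' + minor + '.' + patch + '-' + pre
}

console.log(incPrerelease('1.2.3', 'beta')) // '1.2.4-beta.0'
console.log(incPrerelease('1.2.4-beta.0'))  // '1.2.4-beta.1'
```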

Advanced Range Syntax

Advanced range syntax desugars to primitive comparators in deterministic ways.

Advanced ranges may be combined in the same way as primitive comparators using white space or ||.

Hyphen Ranges X.Y.Z - A.B.C

Specifies an inclusive set.

  • 1.2.3 - 2.3.4 := >=1.2.3 <=2.3.4

If a partial version is provided as the first version in the inclusive range, then the missing pieces are replaced with zeroes.

  • 1.2 - 2.3.4 := >=1.2.0 <=2.3.4

If a partial version is provided as the second version in the inclusive range, then all versions that start with the supplied parts of the tuple are accepted, but nothing that would be greater than the provided tuple parts.

  • 1.2.3 - 2.3 := >=1.2.3 <2.4.0
  • 1.2.3 - 2 := >=1.2.3 <3.0.0

X-Ranges 1.2.x 1.X 1.2.* *

Any of X, x, or * may be used to “stand in” for one of the numeric values in the [major, minor, patch] tuple.

  • * := >=0.0.0 (Any version satisfies)
  • 1.x := >=1.0.0 <2.0.0 (Matching major version)
  • 1.2.x := >=1.2.0 <1.3.0 (Matching major and minor versions)

A partial version range is treated as an X-Range, so the special character is in fact optional.

  • "" (empty string) := * := >=0.0.0
  • 1 := 1.x.x := >=1.0.0 <2.0.0
  • 1.2 := 1.2.x := >=1.2.0 <1.3.0

Tilde Ranges ~1.2.3 ~1.2 ~1

Allows patch-level changes if a minor version is specified on the comparator. Allows minor-level changes if not.

  • ~1.2.3 := >=1.2.3 <1.(2+1).0 := >=1.2.3 <1.3.0
  • ~1.2 := >=1.2.0 <1.(2+1).0 := >=1.2.0 <1.3.0 (Same as 1.2.x)
  • ~1 := >=1.0.0 <(1+1).0.0 := >=1.0.0 <2.0.0 (Same as 1.x)
  • ~0.2.3 := >=0.2.3 <0.(2+1).0 := >=0.2.3 <0.3.0
  • ~0.2 := >=0.2.0 <0.(2+1).0 := >=0.2.0 <0.3.0 (Same as 0.2.x)
  • ~0 := >=0.0.0 <(0+1).0.0 := >=0.0.0 <1.0.0 (Same as 0.x)
  • ~1.2.3-beta.2 := >=1.2.3-beta.2 <1.3.0 Note that prereleases in the 1.2.3 version will be allowed, if they are greater than or equal to beta.2. So, 1.2.3-beta.4 would be allowed, but 1.2.4-beta.2 would not, because it is a prerelease of a different [major, minor, patch] tuple.
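The full-version cases above desugar mechanically: keep the lower bound, and bump the minor component for the upper bound. A simplified sketch (it only handles complete major.minor.patch inputs; partial versions like ~1.2 need the real parser):

```javascript
// Sketch: desugar ~major.minor.patch into primitive comparators.
function tildeToComparators (version) {
  const [major, minor, patch] = version.split('.').map(Number)
  return '>=' + major + '.' + minor + '.' + patch +
         ' <' + major + '.' + (minor + 1) + '.0'
}

console.log(tildeToComparators('1.2.3')) // '>=1.2.3 <1.3.0'
console.log(tildeToComparators('0.2.3')) // '>=0.2.3 <0.3.0'
```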

Caret Ranges ^1.2.3 ^0.2.5 ^0.0.4

Allows changes that do not modify the left-most non-zero digit in the [major, minor, patch] tuple. In other words, this allows patch and minor updates for versions 1.0.0 and above, patch updates for versions 0.X >=0.1.0, and no updates for versions 0.0.X.

Many authors treat a 0.x version as if the x were the major “breaking-change” indicator.

Caret ranges are ideal when an author may make breaking changes between 0.2.4 and 0.3.0 releases, which is a common practice. However, it presumes that there will not be breaking changes between 0.2.4 and 0.2.5. It allows for changes that are presumed to be additive (but non-breaking), according to commonly observed practices.

  • ^1.2.3 := >=1.2.3 <2.0.0
  • ^0.2.3 := >=0.2.3 <0.3.0
  • ^0.0.3 := >=0.0.3 <0.0.4
  • ^1.2.3-beta.2 := >=1.2.3-beta.2 <2.0.0 Note that prereleases in the 1.2.3 version will be allowed, if they are greater than or equal to beta.2. So, 1.2.3-beta.4 would be allowed, but 1.2.4-beta.2 would not, because it is a prerelease of a different [major, minor, patch] tuple.
  • ^0.0.3-beta := >=0.0.3-beta <0.0.4 Note that prereleases in the 0.0.3 version only will be allowed, if they are greater than or equal to beta. So, 0.0.3-pr.2 would be allowed.

When parsing caret ranges, a missing patch value desugars to the number 0, but will allow flexibility within that value, even if the major and minor versions are both 0.

  • ^1.2.x := >=1.2.0 <2.0.0
  • ^0.0.x := >=0.0.0 <0.1.0
  • ^0.0 := >=0.0.0 <0.1.0

Missing minor and patch values desugar to zero, but also allow flexibility within those values, even if the major version is zero.

  • ^1.x := >=1.0.0 <2.0.0
  • ^0.x := >=0.0.0 <1.0.0
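The "left-most non-zero" rule that drives all of the bullets above can be sketched as a small helper computing the exclusive upper bound of a caret range (hypothetical name, full major.minor.patch inputs only):

```javascript
// Sketch: bump the left-most non-zero component of [major, minor, patch]
// to get the exclusive upper bound of a caret range.
function caretUpperBound (version) {
  const [major, minor, patch] = version.split('.').map(Number)
  if (major > 0) return (major + 1) + '.0.0'
  if (minor > 0) return '0.' + (minor + 1) + '.0'
  return '0.0.' + (patch + 1)
}

console.log(caretUpperBound('1.2.3')) // '2.0.0'
console.log(caretUpperBound('0.2.3')) // '0.3.0'
console.log(caretUpperBound('0.0.3')) // '0.0.4'
```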

Range Grammar

Putting all this together, here is a Backus-Naur grammar for ranges, for the benefit of parser authors:

range-set  ::= range ( logical-or range ) *
logical-or ::= ( ' ' ) * '||' ( ' ' ) *
range      ::= hyphen | simple ( ' ' simple ) * | ''
hyphen     ::= partial ' - ' partial
simple     ::= primitive | partial | tilde | caret
primitive  ::= ( '<' | '>' | '>=' | '<=' | '=' ) partial
partial    ::= xr ( '.' xr ( '.' xr qualifier ? )? )?
xr         ::= 'x' | 'X' | '*' | nr
nr         ::= '0' | ['1'-'9'] ( ['0'-'9'] ) *
tilde      ::= '~' partial
caret      ::= '^' partial
qualifier  ::= ( '-' pre )? ( '+' build )?
pre        ::= parts
build      ::= parts
parts      ::= part ( '.' part ) *
part       ::= nr | [-0-9A-Za-z]+
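For instance, the xr and partial productions can be transcribed almost directly into a regular expression (a sketch covering just those two rules; the qualifier production is omitted for brevity):

```javascript
// xr ::= 'x' | 'X' | '*' | nr, where nr forbids leading zeroes.
const XR = '(?:x|X|\\*|0|[1-9][0-9]*)'
// partial ::= xr ( '.' xr ( '.' xr )? )?
const PARTIAL = new RegExp('^' + XR + '(?:\\.' + XR + '(?:\\.' + XR + ')?)?$')

console.log(PARTIAL.test('1.2.x')) // true
console.log(PARTIAL.test('*'))     // true
console.log(PARTIAL.test('01.2'))  // false (a leading zero is not an nr)
```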

Functions

All methods and classes take a final options object argument. All options in this object are false by default. The options supported are:

  • loose Be more forgiving about not-quite-valid semver strings. (Any resulting output will always be 100% strict compliant, of course.) For backwards compatibility reasons, if the options argument is a boolean value instead of an object, it is interpreted to be the loose param.
  • includePrerelease Set to suppress the default behavior of excluding prerelease tagged versions from ranges unless they are explicitly opted into.

Strict-mode Comparators and Ranges will be strict about the SemVer strings that they parse.

  • valid(v): Return the parsed version, or null if it’s not valid.
  • inc(v, release): Return the version incremented by the release type (major, premajor, minor, preminor, patch, prepatch, or prerelease), or null if it’s not valid.
    • premajor in one call will bump the version up to the next major version and down to a prerelease of that major version. preminor and prepatch work the same way.
    • If called from a non-prerelease version, the prerelease will work the same as prepatch. It increments the patch version, then makes a prerelease. If the input version is already a prerelease it simply increments it.
  • prerelease(v): Returns an array of prerelease components, or null if none exist. Example: prerelease('1.2.3-alpha.1') -> ['alpha', 1]
  • major(v): Return the major version number.
  • minor(v): Return the minor version number.
  • patch(v): Return the patch version number.
  • intersects(r1, r2, loose): Return true if the two supplied ranges or comparators intersect.
  • parse(v): Attempt to parse a string as a semantic version, returning either a SemVer object or null.

Comparison

  • gt(v1, v2): v1 > v2
  • gte(v1, v2): v1 >= v2
  • lt(v1, v2): v1 < v2
  • lte(v1, v2): v1 <= v2
  • eq(v1, v2): v1 == v2 This is true if they’re logically equivalent, even if they’re not the exact same string. You already know how to compare strings.
  • neq(v1, v2): v1 != v2 The opposite of eq.
  • cmp(v1, comparator, v2): Pass in a comparison string, and it’ll call the corresponding function above. "===" and "!==" do simple string comparison, but are included for completeness. Throws if an invalid comparison string is provided.
  • compare(v1, v2): Return 0 if v1 == v2, or 1 if v1 is greater, or -1 if v2 is greater. Sorts in ascending order if passed to Array.sort().
  • rcompare(v1, v2): The reverse of compare. Sorts an array of versions in descending order when passed to Array.sort().
  • diff(v1, v2): Returns difference between two versions by the release type (major, premajor, minor, preminor, patch, prepatch, or prerelease), or null if the versions are the same.
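The numeric (not lexicographic) ordering behind compare can be sketched for plain major.minor.patch versions; this is a simplified illustration that deliberately omits the prerelease precedence rules:

```javascript
// Simplified SemVer precedence: compare major, minor, patch numerically.
// (Prerelease ordering is intentionally omitted from this sketch.)
function compare (a, b) {
  const pa = a.split('.').map(Number)
  const pb = b.split('.').map(Number)
  for (let i = 0; i < 3; i++) {
    if (pa[i] !== pb[i]) return pa[i] < pb[i] ? -1 : 1
  }
  return 0
}

console.log(['1.10.0', '1.2.0', '0.9.9'].sort(compare))
// [ '0.9.9', '1.2.0', '1.10.0' ]
```

Note that a plain string sort would misplace '1.10.0' before '1.2.0'; numeric comparison per component is what makes compare suitable for Array.sort().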

Comparators

  • intersects(comparator): Return true if the comparators intersect.

Ranges

  • validRange(range): Return the valid range or null if it’s not valid
  • satisfies(version, range): Return true if the version satisfies the range.
  • maxSatisfying(versions, range): Return the highest version in the list that satisfies the range, or null if none of them do.
  • minSatisfying(versions, range): Return the lowest version in the list that satisfies the range, or null if none of them do.
  • minVersion(range): Return the lowest version that can possibly match the given range.
  • gtr(version, range): Return true if version is greater than all the versions possible in the range.
  • ltr(version, range): Return true if version is less than all the versions possible in the range.
  • outside(version, range, hilo): Return true if the version is outside the bounds of the range in either the high or low direction. The hilo argument must be either the string '>' or '<'. (This is the function called by gtr and ltr.)
  • intersects(range): Return true if any of the range’s comparators intersect.

Note that, since ranges may be non-contiguous, a version might not be greater than a range, less than a range, or satisfy a range! For example, the range 1.2 <1.2.9 || >2.0.0 would have a hole from 1.2.9 until 2.0.0, so the version 1.2.10 would not be greater than the range (because 2.0.1 satisfies, which is higher), nor less than the range (since 1.2.8 satisfies, which is lower), and it also does not satisfy the range.

If you want to know if a version satisfies or does not satisfy a range, use the satisfies(version, range) function.

Coercion

  • coerce(version): Coerces a string to semver if possible

This aims to provide a very forgiving translation of a non-semver string to semver. It looks for the first digit in a string, and consumes all remaining characters which satisfy at least a partial semver (e.g., 1, 1.2, 1.2.3) up to the max permitted length (256 characters). Longer versions are simply truncated (4.6.3.9.2-alpha2 becomes 4.6.3). All surrounding text is simply ignored (v3.4 replaces v3.3.1 becomes 3.4.0). Only text which lacks digits will fail coercion (version one is not valid). The maximum length for any semver component considered for coercion is 16 characters; longer components will be ignored (10000000000000000.4.7.4 becomes 4.7.4). The maximum value for any semver component is Number.MAX_SAFE_INTEGER || (2**53 - 1); higher value components are invalid (9999999999999999.4.7.4 is likely invalid).
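The first-digit rule can be sketched with a single regular expression; this illustration ignores the 16-character component limit and MAX_SAFE_INTEGER check described above, and coerce here is a local sketch, not the library function:

```javascript
// Sketch: take the first digit run and up to three dot-separated
// numeric components, padding missing components with zeros.
function coerce (str) {
  const m = String(str).match(/(\d+)(?:\.(\d+))?(?:\.(\d+))?/)
  if (!m) return null
  return m[1] + '.' + (m[2] || '0') + '.' + (m[3] || '0')
}

console.log(coerce('v3.4 replaces v3.3.1')) // '3.4.0'
console.log(coerce('4.6.3.9.2-alpha2'))     // '4.6.3'
console.log(coerce('version one'))          // null
```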



buffer travis npm downloads

The buffer module from node.js, for the browser.

saucelabs

With browserify, simply require('buffer') or use the Buffer global and you will get this module.

The goal is to provide an API that is 100% identical to node’s Buffer API. Read the official docs for the full list of properties, instance methods, and class methods that are supported.

features

  • Manipulate binary data like a boss, in all browsers – even IE6!
  • Super fast. Backed by Typed Arrays (Uint8Array/ArrayBuffer, not Object)
  • Extremely small bundle size (5.04KB minified + gzipped, 35.5KB with comments)
  • Excellent browser support (IE 6+, Chrome 4+, Firefox 3+, Safari 5.1+, Opera 11+, iOS, etc.)
  • Preserves Node API exactly, with one minor difference (see below)
  • Square-bracket buf[4] notation works, even in old browsers like IE6!
  • Does not modify any browser prototypes or put anything on window
  • Comprehensive test suite (including all buffer tests from node.js core)

install

To use this module directly (without browserify), install it from npm: npm install buffer.

This module was previously called native-buffer-browserify, but please use buffer from now on.

A standalone bundle is available here, for non-browserify users.

usage

The module’s API is identical to node’s Buffer API. Read the official docs for the full list of properties, instance methods, and class methods that are supported.

As mentioned above, require('buffer') or use the Buffer global with browserify and this module will automatically be included in your bundle. Almost any npm module will work in the browser, even if it assumes that the node Buffer API will be available.

To depend on this module explicitly (without browserify), require it as require('buffer/'): the trailing slash tells the node.js module lookup algorithm (also used by browserify) to use the npm module named buffer instead of the node.js core module named buffer!

how does it work?

The Buffer constructor returns instances of Uint8Array that have their prototype changed to Buffer.prototype. Furthermore, Buffer is a subclass of Uint8Array, so the returned instances will have all the node Buffer methods and the Uint8Array methods. Square bracket notation works as expected – it returns a single octet.

The Uint8Array prototype remains unmodified.

one minor difference

In old browsers, buf.slice() does not modify parent buffer’s memory

If you only support modern browsers (specifically, those with typed array support), then this issue does not affect you. If you support super old browsers, then read on.

In node, the slice() method returns a new Buffer that shares underlying memory with the original Buffer. When you modify one buffer, you modify the other. Read more.

In browsers with typed array support, this Buffer implementation supports this behavior. In browsers without typed arrays, an alternate buffer implementation is used that is based on Object which has no mechanism to point separate Buffers to the same underlying slab of memory.

You can see which browser versions lack typed array support here.

tracking the latest node api

This module tracks the Buffer API in the latest (unstable) version of node.js. The Buffer API is considered stable in the node stability index, so it is unlikely that there will ever be breaking changes. Nonetheless, when/if the Buffer API changes in node, this module’s API will change accordingly.

related packages

  • buffer-equals - Node.js 0.12 buffer.equals() ponyfill
  • buffer-reverse - A lite module for reverse-operations on buffers
  • buffer-xor - A simple module for bitwise-xor on buffers
  • is-buffer - Determine if an object is a Buffer without including the whole Buffer package
  • typedarray-to-buffer - Convert a typed array to a Buffer without a copy

performance

See perf tests in /perf.

BrowserBuffer is the browser buffer module (this repo). Uint8Array is included as a sanity check (since BrowserBuffer uses Uint8Array under the hood, Uint8Array will always be at least a bit faster). Finally, NodeBuffer is the node.js buffer module, which is included to compare against.

NOTE: Performance has improved since these benchmarks were taken. PRs welcome to update the README.

Chrome 38

Method Operations Accuracy Sampled Fastest
BrowserBuffer#bracket-notation 11,457,464 ops/sec ±0.86% 66
Uint8Array#bracket-notation 10,824,332 ops/sec ±0.74% 65
BrowserBuffer#concat 450,532 ops/sec ±0.76% 68
Uint8Array#concat 1,368,911 ops/sec ±1.50% 62
BrowserBuffer#copy(16000) 903,001 ops/sec ±0.96% 67
Uint8Array#copy(16000) 1,422,441 ops/sec ±1.04% 66
BrowserBuffer#copy(16) 11,431,358 ops/sec ±0.46% 69
Uint8Array#copy(16) 13,944,163 ops/sec ±1.12% 68
BrowserBuffer#new(16000) 106,329 ops/sec ±6.70% 44
Uint8Array#new(16000) 131,001 ops/sec ±2.85% 31
BrowserBuffer#new(16) 1,554,491 ops/sec ±1.60% 65
Uint8Array#new(16) 6,623,930 ops/sec ±1.66% 65
BrowserBuffer#readDoubleBE 112,830 ops/sec ±0.51% 69
DataView#getFloat64 93,500 ops/sec ±0.57% 68
BrowserBuffer#readFloatBE 146,678 ops/sec ±0.95% 68
DataView#getFloat32 99,311 ops/sec ±0.41% 67
BrowserBuffer#readUInt32LE 843,214 ops/sec ±0.70% 69
DataView#getUint32 103,024 ops/sec ±0.64% 67
BrowserBuffer#slice 1,013,941 ops/sec ±0.75% 67
Uint8Array#subarray 1,903,928 ops/sec ±0.53% 67
BrowserBuffer#writeFloatBE 61,387 ops/sec ±0.90% 67
DataView#setFloat32 141,249 ops/sec ±0.40% 66

Firefox 33

Method Operations Accuracy Sampled Fastest
BrowserBuffer#bracket-notation 20,800,421 ops/sec ±1.84% 60
Uint8Array#bracket-notation 20,826,235 ops/sec ±2.02% 61
BrowserBuffer#concat 153,076 ops/sec ±2.32% 61
Uint8Array#concat 1,255,674 ops/sec ±8.65% 52
BrowserBuffer#copy(16000) 1,105,312 ops/sec ±1.16% 63
Uint8Array#copy(16000) 1,615,911 ops/sec ±0.55% 66
BrowserBuffer#copy(16) 16,357,599 ops/sec ±0.73% 68
Uint8Array#copy(16) 31,436,281 ops/sec ±1.05% 68
BrowserBuffer#new(16000) 52,995 ops/sec ±6.01% 35
Uint8Array#new(16000) 87,686 ops/sec ±5.68% 45
BrowserBuffer#new(16) 252,031 ops/sec ±1.61% 66
Uint8Array#new(16) 8,477,026 ops/sec ±0.49% 68
BrowserBuffer#readDoubleBE 99,871 ops/sec ±0.41% 69
DataView#getFloat64 285,663 ops/sec ±0.70% 68
BrowserBuffer#readFloatBE 115,540 ops/sec ±0.42% 69
DataView#getFloat32 288,722 ops/sec ±0.82% 68
BrowserBuffer#readUInt32LE 633,926 ops/sec ±1.08% 67
DataView#getUint32 294,808 ops/sec ±0.79% 64
BrowserBuffer#slice 349,425 ops/sec ±0.46% 69
Uint8Array#subarray 5,965,819 ops/sec ±0.60% 65
BrowserBuffer#writeFloatBE 59,980 ops/sec ±0.41% 67
DataView#setFloat32 317,634 ops/sec ±0.63% 68

Safari 8

Method Operations Accuracy Sampled Fastest
BrowserBuffer#bracket-notation 10,279,729 ops/sec ±2.25% 56
Uint8Array#bracket-notation 10,030,767 ops/sec ±2.23% 59
BrowserBuffer#concat 144,138 ops/sec ±1.38% 65
Uint8Array#concat 4,950,764 ops/sec ±1.70% 63
BrowserBuffer#copy(16000) 1,058,548 ops/sec ±1.51% 64
Uint8Array#copy(16000) 1,409,666 ops/sec ±1.17% 65
BrowserBuffer#copy(16) 6,282,529 ops/sec ±1.88% 58
Uint8Array#copy(16) 11,907,128 ops/sec ±2.87% 58
BrowserBuffer#new(16000) 101,663 ops/sec ±3.89% 57
Uint8Array#new(16000) 22,050,818 ops/sec ±6.51% 46
BrowserBuffer#new(16) 176,072 ops/sec ±2.13% 64
Uint8Array#new(16) 24,385,731 ops/sec ±5.01% 51
BrowserBuffer#readDoubleBE 41,341 ops/sec ±1.06% 67
DataView#getFloat64 322,280 ops/sec ±0.84% 68
BrowserBuffer#readFloatBE 46,141 ops/sec ±1.06% 65
DataView#getFloat32 337,025 ops/sec ±0.43% 69
BrowserBuffer#readUInt32LE 151,551 ops/sec ±1.02% 66
DataView#getUint32 308,278 ops/sec ±0.94% 67
BrowserBuffer#slice 197,365 ops/sec ±0.95% 66
Uint8Array#subarray 9,558,024 ops/sec ±3.08% 58
BrowserBuffer#writeFloatBE 17,518 ops/sec ±1.03% 63
DataView#setFloat32 319,751 ops/sec ±0.48% 68

Node 0.11.14

Method Operations Accuracy Sampled Fastest
BrowserBuffer#bracket-notation 10,489,828 ops/sec ±3.25% 90
Uint8Array#bracket-notation 10,534,884 ops/sec ±0.81% 92
NodeBuffer#bracket-notation 10,389,910 ops/sec ±0.97% 87
BrowserBuffer#concat 487,830 ops/sec ±2.58% 88
Uint8Array#concat 1,814,327 ops/sec ±1.28% 88
NodeBuffer#concat 1,636,523 ops/sec ±1.88% 73
BrowserBuffer#copy(16000) 1,073,665 ops/sec ±0.77% 90
Uint8Array#copy(16000) 1,348,517 ops/sec ±0.84% 89
NodeBuffer#copy(16000) 1,289,533 ops/sec ±0.82% 93
BrowserBuffer#copy(16) 12,782,706 ops/sec ±0.74% 85
Uint8Array#copy(16) 14,180,427 ops/sec ±0.93% 92
NodeBuffer#copy(16) 11,083,134 ops/sec ±1.06% 89
BrowserBuffer#new(16000) 141,678 ops/sec ±3.30% 67
Uint8Array#new(16000) 161,491 ops/sec ±2.96% 60
NodeBuffer#new(16000) 292,699 ops/sec ±3.20% 55
BrowserBuffer#new(16) 1,655,466 ops/sec ±2.41% 82
Uint8Array#new(16) 14,399,926 ops/sec ±0.91% 94
NodeBuffer#new(16) 3,894,696 ops/sec ±0.88% 92
BrowserBuffer#readDoubleBE 109,582 ops/sec ±0.75% 93
DataView#getFloat64 91,235 ops/sec ±0.81% 90
NodeBuffer#readDoubleBE 88,593 ops/sec ±0.96% 81
BrowserBuffer#readFloatBE 139,854 ops/sec ±1.03% 85
DataView#getFloat32 98,744 ops/sec ±0.80% 89
NodeBuffer#readFloatBE 92,769 ops/sec ±0.94% 93
BrowserBuffer#readUInt32LE 710,861 ops/sec ±0.82% 92
DataView#getUint32 117,893 ops/sec ±0.84% 91
NodeBuffer#readUInt32LE 851,412 ops/sec ±0.72% 93
BrowserBuffer#slice 1,673,877 ops/sec ±0.73% 94
Uint8Array#subarray 6,919,243 ops/sec ±0.67% 90
NodeBuffer#slice 4,617,604 ops/sec ±0.79% 93
BrowserBuffer#writeFloatBE 66,011 ops/sec ±0.75% 93
DataView#setFloat32 127,760 ops/sec ±0.72% 93
NodeBuffer#writeFloatBE 103,352 ops/sec ±0.83% 93

iojs 1.8.1

Method Operations Accuracy Sampled Fastest
BrowserBuffer#bracket-notation 10,990,488 ops/sec ±1.11% 91
Uint8Array#bracket-notation 11,268,757 ops/sec ±0.65% 97
NodeBuffer#bracket-notation 11,353,260 ops/sec ±0.83% 94
BrowserBuffer#concat 378,954 ops/sec ±0.74% 94
Uint8Array#concat 1,358,288 ops/sec ±0.97% 87
NodeBuffer#concat 1,934,050 ops/sec ±1.11% 78
BrowserBuffer#copy(16000) 894,538 ops/sec ±0.56% 84
Uint8Array#copy(16000) 1,442,656 ops/sec ±0.71% 96
NodeBuffer#copy(16000) 1,457,898 ops/sec ±0.53% 92
BrowserBuffer#copy(16) 12,870,457 ops/sec ±0.67% 95
Uint8Array#copy(16) 16,643,989 ops/sec ±0.61% 93
NodeBuffer#copy(16) 14,885,848 ops/sec ±0.74% 94
BrowserBuffer#new(16000) 109,264 ops/sec ±4.21% 63
Uint8Array#new(16000) 138,916 ops/sec ±1.87% 61
NodeBuffer#new(16000) 281,449 ops/sec ±3.58% 51
BrowserBuffer#new(16) 1,362,935 ops/sec ±0.56% 99
Uint8Array#new(16) 6,193,090 ops/sec ±0.64% 95
NodeBuffer#new(16) 4,745,425 ops/sec ±1.56% 90
BrowserBuffer#readDoubleBE 118,127 ops/sec ±0.59% 93
DataView#getFloat64 107,332 ops/sec ±0.65% 91
NodeBuffer#readDoubleBE 116,274 ops/sec ±0.94% 95
BrowserBuffer#readFloatBE 150,326 ops/sec ±0.58% 95
DataView#getFloat32 110,541 ops/sec ±0.57% 98
NodeBuffer#readFloatBE 121,599 ops/sec ±0.60% 87
BrowserBuffer#readUInt32LE 814,147 ops/sec ±0.62% 93
DataView#getUint32 137,592 ops/sec ±0.64% 90
NodeBuffer#readUInt32LE 931,650 ops/sec ±0.71% 96
BrowserBuffer#slice 878,590 ops/sec ±0.68% 93
Uint8Array#subarray 2,843,308 ops/sec ±1.02% 90
NodeBuffer#slice 4,998,316 ops/sec ±0.68% 90
BrowserBuffer#writeFloatBE 65,927 ops/sec ±0.74% 93
DataView#setFloat32 139,823 ops/sec ±0.97% 89
NodeBuffer#writeFloatBE 135,763 ops/sec ±0.65% 96

Testing the project

First, install the project:

npm install

Then, to run tests in Node.js, run:

npm run test-node

To test locally in a browser, you can run:

npm run test-browser-local

This will print out a URL that you can then open in a browser to run the tests, using Zuul.

To run automated browser tests using Saucelabs, ensure that your SAUCE_USERNAME and SAUCE_ACCESS_KEY environment variables are set, then run:

npm test

This is what’s run in Travis, to check against various browsers. The list of browsers is kept in the .zuul.yml file.

JavaScript Standard Style

This module uses JavaScript Standard Style.

JavaScript Style Guide

To test that the code conforms to the style, npm install and run:

./node_modules/.bin/standard

credit

This was originally forked from buffer-browserify.



base NPM version NPM monthly downloads NPM total downloads Linux Build Status

base is the foundation for creating modular, unit testable and highly pluggable node.js applications, starting with a handful of common methods, like set, get, del and use.

Install

Install with npm:

npm install --save base

What is Base?

Base is a framework for rapidly creating high quality node.js applications, using plugins like building blocks.

Guiding principles

The core team follows these principles to help guide API decisions:

  • Compact API surface: The smaller the API surface, the easier the library will be to learn and use.
  • Easy to extend: Implementors can use any npm package, and write plugins in pure JavaScript. If you’re building complex apps, Base simplifies inheritance.
  • Easy to test: No special setup should be required to unit test Base or base plugins.

Minimal API surface

The API was designed to provide only the minimum necessary functionality for creating a useful application, with or without plugins.

Base core

Base itself ships with only a handful of useful methods, such as:

  • .set: for setting values on the instance
  • .get: for getting values from the instance
  • .has: to check if a property exists on the instance
  • .define: for setting non-enumerable values on the instance
  • .use: for adding plugins

Be generic

When deciding on a method to add or remove, we try to answer these questions:

  1. Will all or most Base applications need this method?
  2. Will this method encourage practices or enforce conventions that are beneficial to implementors?
  3. Can or should this be done in a plugin instead?

Composability

Plugin system

It couldn’t be easier to extend Base with any features or custom functionality you can think of.

Base plugins are just functions that take an instance of Base:

Inheritance

Easily inherit Base using .extend:

Inherit or instantiate with a namespace

By default, the .get, .set and .has methods set and get values from the root of the base instance. You can customize this using the .namespace method exposed on the exported function. For example:

API

Usage

Base

Create an instance of Base with the given config and options.

Params

  • config {Object}: If supplied, this object is passed to cache-base to merge onto the instance upon instantiation.
  • options {Object}: If supplied, this object is used to initialize the base.options object.

Example

.is

Set the given name on app._name and app.is* properties. Used for doing lookups in plugins.

Params

  • name {String}
  • returns {Boolean}

Example

.isRegistered

Returns true if a plugin has already been registered on an instance.

Plugin implementors are encouraged to use this first thing in a plugin to prevent the plugin from being called more than once on the same instance.

Params

  • name {String}: The plugin name.
  • register {Boolean}: To record the plugin as registered when it is not already, pass true as the second argument.
  • returns {Boolean}: Returns true if a plugin is already registered.

Events

  • emits: plugin Emits the name of the plugin being registered. Useful for unit tests, to ensure plugins are only registered once.

Example

.use

Define a plugin function to be called immediately upon init. Plugins are chainable and expose the following arguments to the plugin function:

Params

  • fn {Function}: plugin function to call
  • returns {Object}: Returns the item instance for chaining.

Example

.define

The .define method is used for adding a non-enumerable property to the instance. Dot-notation is not supported with define.

Params

  • key {String}: The name of the property to define.
  • value {any}
  • returns {Object}: Returns the instance for chaining.

Example

.mixin

Mix property key onto the Base prototype. If Base is inherited using Base.extend, this method will be overridden by a new mixin method that only adds properties to the prototype of the inheriting application.

Params

  • key {String}
  • val {Object|Array}
  • returns {Object}: Returns the base instance for chaining.

Example

.base

Getter/setter used when creating nested instances of Base, for storing a reference to the first ancestor instance. This works by setting an instance of Base on the parent property of a “child” instance. The base property defaults to the current instance if no parent property is defined.

Example

#use

Static method for adding global plugin functions that will be added to an instance when created.

Params

  • fn {Function}: Plugin function to use on each instance.
  • returns {Object}: Returns the Base constructor for chaining

Example

#extend

Static method for inheriting the prototype and static methods of the Base class. This method greatly simplifies the process of creating inheritance-based applications. See static-extend for more details.

Params

  • Ctor {Function}: constructor to extend
  • methods {Object}: Optional prototype properties to mix in.
  • returns {Object}: Returns the Base constructor for chaining

Example

#mixin

Used for adding methods to the Base prototype, and/or to the prototype of child instances. When a mixin function returns a function, the returned function is pushed onto the .mixins array, making it available to be used on inheriting classes whenever Base.mixins() is called (e.g. Base.mixins(Child)).

Params

  • fn {Function}: Function to call
  • returns {Object}: Returns the Base constructor for chaining

Example

#mixins

Static method for running global mixin functions against a child constructor. Mixins must be registered before calling this method.

Params

  • Child {Function}: Constructor function of a child class
  • returns {Object}: Returns the Base constructor for chaining

Example

#inherit

Similar to util.inherits, but copies all static properties, prototype properties, and getters/setters from Provider to Receiver. See class-utils for more details.

Params

  • Receiver {Function}: Receiving (child) constructor
  • Provider {Function}: Providing (parent) constructor
  • returns {Object}: Returns the Base constructor for chaining

Example

In the wild

The following node.js applications were built with Base:

Test coverage

Statements   : 98.91% ( 91/92 )
Branches     : 92.86% ( 26/28 )
Functions    : 100% ( 17/17 )
Lines        : 98.9% ( 90/91 )

History

v0.11.2

  • fixes https://github.com/micromatch/micromatch/issues/99

v0.11.0

Breaking changes

  • Static .use and .run methods are now non-enumerable

v0.9.0

Breaking changes

  • .is no longer takes a function, a string must be passed
  • all remaining .debug code has been removed
  • app._namespace was removed (related to debug)
  • .plugin, .use, and .define no longer emit events
  • .assertPlugin was removed
  • .lazy was removed

About

Contributing

Pull requests and stars are always welcome. For bugs and feature requests, please create an issue.

Commits Contributor
141 jonschlinkert
30 doowb
3 charlike
1 criticalmash
1 wtgtybhertgeghgtwtg

Building docs

(This project’s readme.md is generated by verb, please don’t edit the readme directly. Any changes to the readme must be made in the .verb.md readme template.)

To generate the readme, run the following command:

npm install -g verb verb-generate-readme && verb

Running tests

Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command:

npm install && npm test

Author

Jon Schlinkert


This file was generated by verb-generate-readme, v0.6.0, on September 07, 2017.

npm version Downloads Build Status FOSSA Status Open Collective Backers Open Collective Sponsors Follow us on Twitter



ESLint

Website | Configuring | Rules | Contributing | Reporting Bugs | Code of Conduct | Twitter | Mailing List | Chat Room

ESLint is a tool for identifying and reporting on patterns found in ECMAScript/JavaScript code. In many ways, it is similar to JSLint and JSHint with a few exceptions:

  • ESLint uses Espree for JavaScript parsing.
  • ESLint uses an AST to evaluate patterns in code.
  • ESLint is completely pluggable, every single rule is a plugin and you can add more at runtime.

Table of Contents

  1. Installation and Usage
  2. Configuration
  3. Code of Conduct
  4. Filing Issues
  5. Frequently Asked Questions
  6. Releases
  7. Security Policy
  8. Semantic Versioning Policy
  9. Team
  10. Sponsors
  11. Technology Sponsors

Installation and Usage

Prerequisites: Node.js (^10.12.0, or >=12.0.0) built with SSL support. (If you are using an official Node.js distribution, SSL is always built in.)

You can install ESLint using npm:

npm install eslint --save-dev

You should then set up a configuration file:

$ ./node_modules/.bin/eslint --init

After that, you can run ESLint on any file or directory like this:

$ ./node_modules/.bin/eslint yourfile.js

Configuration

After running eslint --init, you’ll have a .eslintrc file in your directory. In it, you’ll see some rules configured like this:
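A typical rules section looks something like this:

```json
{
  "rules": {
    "semi": ["error", "always"],
    "quotes": ["error", "double"]
  }
}
```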

The names "semi" and "quotes" are the names of rules in ESLint. The first value is the error level of the rule and can be one of these values:

  • "off" or 0 - turn the rule off
  • "warn" or 1 - turn the rule on as a warning (doesn’t affect exit code)
  • "error" or 2 - turn the rule on as an error (exit code will be 1)

The three error levels allow you fine-grained control over how ESLint applies rules (for more configuration options and details, see the configuration docs).

Code of Conduct

ESLint adheres to the JS Foundation Code of Conduct.

Filing Issues

Before filing an issue, please be sure to read the guidelines for what you’re reporting:

Frequently Asked Questions

I’m using JSCS, should I migrate to ESLint?

Yes. JSCS has reached end of life and is no longer supported.

We have prepared a migration guide to help you convert your JSCS settings to an ESLint configuration.

We are now at or near 100% compatibility with JSCS. If you try ESLint and believe we are not yet compatible with a JSCS rule/configuration, please create an issue (mentioning that it is a JSCS compatibility issue) and we will evaluate it as per our normal process.

Does Prettier replace ESLint?

No, ESLint does both traditional linting (looking for problematic patterns) and style checking (enforcement of conventions). You can use ESLint for everything, or you can combine both using Prettier to format your code and ESLint to catch possible errors.

Why can’t ESLint find my plugins?

  • Make sure your plugins (and ESLint) are both in your project’s package.json as devDependencies (or dependencies, if your project uses ESLint at runtime).
  • Make sure you have run npm install and all your dependencies are installed.
  • Make sure your plugins’ peerDependencies have been installed as well. You can use npm view eslint-plugin-myplugin peerDependencies to see what peer dependencies eslint-plugin-myplugin has.

Does ESLint support JSX?

Yes, ESLint natively supports parsing JSX syntax (this must be enabled in configuration). Please note that supporting JSX syntax is not the same as supporting React. React applies specific semantics to JSX syntax that ESLint doesn’t recognize. We recommend using eslint-plugin-react if you are using React and want React semantics.

What ECMAScript versions does ESLint support?

ESLint has full support for ECMAScript 3, 5 (default), 2015, 2016, 2017, 2018, 2019, and 2020. You can set your desired ECMAScript syntax (and other settings, like global variables or your target environments) through configuration.

What about experimental features?

ESLint’s parser only officially supports the latest final ECMAScript standard. We will make changes to core rules in order to avoid crashes on stage 3 ECMAScript syntax proposals (as long as they are implemented using the correct experimental ESTree syntax). We may make changes to core rules to better work with language extensions (such as JSX, Flow, and TypeScript) on a case-by-case basis.

In other cases (including if rules need to warn on more or fewer cases due to new syntax, rather than just not crashing), we recommend you use other parsers and/or rule plugins. If you are using Babel, you can use the babel-eslint parser and eslint-plugin-babel to use any option available in Babel.

Where to ask for help?

Join our Mailing List or Chatroom.

Releases

We have scheduled releases every two weeks on Friday or Saturday. You can follow a release issue for updates about the scheduling of any particular release.

Security Policy

ESLint takes security seriously. We work hard to ensure that ESLint is safe for everyone and that security issues are addressed quickly and responsibly. Read the full security policy.

Semantic Versioning Policy

ESLint follows semantic versioning. However, due to the nature of ESLint as a code quality tool, it’s not always clear when a minor or major version bump occurs. To help clarify this for everyone, we’ve defined the following semantic versioning policy for ESLint:

  • Patch release (intended to not break your lint build)
    • A bug fix in a rule that results in ESLint reporting fewer linting errors.
    • A bug fix to the CLI or core (including formatters).
    • Improvements to documentation.
    • Non-user-facing changes such as refactoring code, adding, deleting, or modifying tests, and increasing test coverage.
    • Re-releasing after a failed release (i.e., publishing a release that doesn’t work for anyone).
  • Minor release (might break your lint build)
    • A bug fix in a rule that results in ESLint reporting more linting errors.
    • A new rule is created.
    • A new option to an existing rule that does not result in ESLint reporting more linting errors by default.
    • An existing rule is deprecated.
    • A new CLI capability is created.
    • New capabilities to the public API are added (new classes, new methods, new arguments to existing methods, etc.).
    • A new formatter is created.
    • eslint:recommended is updated and will result in strictly fewer linting errors (e.g., rule removals).
  • Major release (likely to break your lint build)
    • eslint:recommended is updated and may result in new linting errors (e.g., rule additions, most rule option updates).
    • A new option to an existing rule that results in ESLint reporting more linting errors by default.
    • An existing formatter is removed.
    • Part of the public API is removed or changed in an incompatible way. The public API includes:
      • Rule schemas
      • Configuration schema
      • Command-line options
      • Node.js API
      • Rule, formatter, parser, plugin APIs

According to our policy, any minor update may report more linting errors than the previous release (ex: from a bug fix). As such, we recommend using the tilde (~) in package.json e.g. "eslint": "~3.1.0" to guarantee the results of your builds.

FOSSA Status

Team

These folks keep the project moving and are resources for help.

Technical Steering Committee (TSC)

The people who manage releases, review feature requests, and meet regularly to ensure ESLint is properly maintained.


Nicholas C. Zakas

Brandon Mills

Toru Nagashima

Milos Djermanovic

Reviewers

The people who review and implement new features.


薛定谔的猫

Committers

The people who review and fix bugs and help triage issues.


Pig Fang

Anix

YeonJuan

Sponsors

The following companies, organizations, and individuals support ESLint’s ongoing maintenance and development. Become a Sponsor to get your logo on our README and website.

Platinum Sponsors

Automattic

Gold Sponsors

Chrome's Web Framework & Tools Performance Fund Shopify Salesforce Airbnb Microsoft FOSS Fund Sponsorships

Silver Sponsors

Liftoff AMP Project

Bronze Sponsors

Writers Per Hour 2021 calendar Buy.Fineproxy.Org Veikkaajat.com Anagram Solver Bugsnag Stability Monitoring Mixpanel VPS Server Icons8: free icons, photos, illustrations, and music Discord ThemeIsle Fire Stick Tricks

Technology Sponsors



semver(1) – The semantic versioner for npm

Install

Usage

As a node module:

As a command-line utility:

$ semver -h

A JavaScript implementation of the https://semver.org/ specification

Usage: semver [options] <version> [<version> [...]]
Prints valid versions sorted by SemVer precedence

Options:
-r --range <range>
        Print versions that match the specified range.

-i --increment [<level>]
        Increment a version by the specified level.  Level can
        be one of: major, minor, patch, premajor, preminor,
        prepatch, or prerelease.  Default level is 'patch'.
        Only one version may be specified.

--preid <identifier>
        Identifier to be used to prefix premajor, preminor,
        prepatch or prerelease version increments.

-l --loose
        Interpret versions and ranges loosely

-p --include-prerelease
        Always include prerelease versions in range matching

-c --coerce
        Coerce a string into SemVer if possible
        (does not imply --loose)

--rtl
        Coerce version strings right to left

--ltr
        Coerce version strings left to right (default)

Program exits successfully if any valid version satisfies
all supplied ranges, and prints all satisfying versions.

If no satisfying versions are found, then exits failure.

Versions are printed in ascending order, so supplying
multiple versions to the utility will just sort them.

Versions

A “version” is described by the v2.0.0 specification found at https://semver.org/.

A leading "=" or "v" character is stripped off and ignored.

Ranges

A version range is a set of comparators which specify versions that satisfy the range.

A comparator is composed of an operator and a version. The set of primitive operators is:

  • < Less than
  • <= Less than or equal to
  • > Greater than
  • >= Greater than or equal to
  • = Equal. If no operator is specified, then equality is assumed, so this operator is optional, but MAY be included.

For example, the comparator >=1.2.7 would match the versions 1.2.7, 1.2.8, 2.5.3, and 1.3.9, but not the versions 1.2.6 or 1.1.0.

Comparators can be joined by whitespace to form a comparator set, which is satisfied by the intersection of all of the comparators it includes.

A range is composed of one or more comparator sets, joined by ||. A version matches a range if and only if every comparator in at least one of the ||-separated comparator sets is satisfied by the version.

For example, the range >=1.2.7 <1.3.0 would match the versions 1.2.7, 1.2.8, and 1.2.99, but not the versions 1.2.6, 1.3.0, or 1.1.0.

The range 1.2.7 || >=1.2.9 <2.0.0 would match the versions 1.2.7, 1.2.9, and 1.4.6, but not the versions 1.2.8 or 2.0.0.

Prerelease Tags

If a version has a prerelease tag (for example, 1.2.3-alpha.3) then it will only be allowed to satisfy comparator sets if at least one comparator with the same [major, minor, patch] tuple also has a prerelease tag.

For example, the range >1.2.3-alpha.3 would be allowed to match the version 1.2.3-alpha.7, but it would not be satisfied by 3.4.5-alpha.9, even though 3.4.5-alpha.9 is technically “greater than” 1.2.3-alpha.3 according to the SemVer sort rules. The version range only accepts prerelease tags on the 1.2.3 version. The version 3.4.5 would satisfy the range, because it does not have a prerelease flag, and 3.4.5 is greater than 1.2.3-alpha.7.

The purpose for this behavior is twofold. First, prerelease versions frequently are updated very quickly, and contain many breaking changes that are (by the author’s design) not yet fit for public consumption. Therefore, by default, they are excluded from range matching semantics.

Second, a user who has opted into using a prerelease version has clearly indicated the intent to use that specific set of alpha/beta/rc versions. By including a prerelease tag in the range, the user is indicating that they are aware of the risk. However, it is still not appropriate to assume that they have opted into taking a similar risk on the next set of prerelease versions.

Note that this behavior can be suppressed (treating all prerelease versions as if they were normal versions, for the purpose of range matching) by setting the includePrerelease flag on the options object to any functions that do range matching.

Prerelease Identifiers

The method .inc takes an additional identifier string argument that will append the value of the string as a prerelease identifier:

command-line example:

$ semver 1.2.3 -i prerelease --preid beta
1.2.4-beta.0

Which then can be used to increment further:

$ semver 1.2.4-beta.0 -i prerelease
1.2.4-beta.1

Advanced Range Syntax

Advanced range syntax desugars to primitive comparators in deterministic ways.

Advanced ranges may be combined in the same way as primitive comparators using white space or ||.

Hyphen Ranges X.Y.Z - A.B.C

Specifies an inclusive set.

  • 1.2.3 - 2.3.4 := >=1.2.3 <=2.3.4

If a partial version is provided as the first version in the inclusive range, then the missing pieces are replaced with zeroes.

  • 1.2 - 2.3.4 := >=1.2.0 <=2.3.4

If a partial version is provided as the second version in the inclusive range, then all versions that start with the supplied parts of the tuple are accepted, but nothing that would be greater than the provided tuple parts.

  • 1.2.3 - 2.3 := >=1.2.3 <2.4.0
  • 1.2.3 - 2 := >=1.2.3 <3.0.0

X-Ranges 1.2.x 1.X 1.2.* *

Any of X, x, or * may be used to “stand in” for one of the numeric values in the [major, minor, patch] tuple.

  • * := >=0.0.0 (Any version satisfies)
  • 1.x := >=1.0.0 <2.0.0 (Matching major version)
  • 1.2.x := >=1.2.0 <1.3.0 (Matching major and minor versions)

A partial version range is treated as an X-Range, so the special character is in fact optional.

  • "" (empty string) := * := >=0.0.0
  • 1 := 1.x.x := >=1.0.0 <2.0.0
  • 1.2 := 1.2.x := >=1.2.0 <1.3.0

Tilde Ranges ~1.2.3 ~1.2 ~1

Allows patch-level changes if a minor version is specified on the comparator. Allows minor-level changes if not.

  • ~1.2.3 := >=1.2.3 <1.(2+1).0 := >=1.2.3 <1.3.0
  • ~1.2 := >=1.2.0 <1.(2+1).0 := >=1.2.0 <1.3.0 (Same as 1.2.x)
  • ~1 := >=1.0.0 <(1+1).0.0 := >=1.0.0 <2.0.0 (Same as 1.x)
  • ~0.2.3 := >=0.2.3 <0.(2+1).0 := >=0.2.3 <0.3.0
  • ~0.2 := >=0.2.0 <0.(2+1).0 := >=0.2.0 <0.3.0 (Same as 0.2.x)
  • ~0 := >=0.0.0 <(0+1).0.0 := >=0.0.0 <1.0.0 (Same as 0.x)
  • ~1.2.3-beta.2 := >=1.2.3-beta.2 <1.3.0 Note that prereleases in the 1.2.3 version will be allowed, if they are greater than or equal to beta.2. So, 1.2.3-beta.4 would be allowed, but 1.2.4-beta.2 would not, because it is a prerelease of a different [major, minor, patch] tuple.

Caret Ranges ^1.2.3 ^0.2.5 ^0.0.4

Allows changes that do not modify the left-most non-zero element in the [major, minor, patch] tuple. In other words, this allows patch and minor updates for versions 1.0.0 and above, patch updates for versions 0.X >=0.1.0, and no updates for versions 0.0.X.

Many authors treat a 0.x version as if the x were the major “breaking-change” indicator.

Caret ranges are ideal when an author may make breaking changes between 0.2.4 and 0.3.0 releases, which is a common practice. However, it presumes that there will not be breaking changes between 0.2.4 and 0.2.5. It allows for changes that are presumed to be additive (but non-breaking), according to commonly observed practices.

  • ^1.2.3 := >=1.2.3 <2.0.0
  • ^0.2.3 := >=0.2.3 <0.3.0
  • ^0.0.3 := >=0.0.3 <0.0.4
  • ^1.2.3-beta.2 := >=1.2.3-beta.2 <2.0.0 Note that prereleases in the 1.2.3 version will be allowed, if they are greater than or equal to beta.2. So, 1.2.3-beta.4 would be allowed, but 1.2.4-beta.2 would not, because it is a prerelease of a different [major, minor, patch] tuple.
  • ^0.0.3-beta := >=0.0.3-beta <0.0.4 Note that prereleases in the 0.0.3 version only will be allowed, if they are greater than or equal to beta. So, 0.0.3-pr.2 would be allowed.

When parsing caret ranges, a missing patch value desugars to the number 0, but will allow flexibility within that value, even if the major and minor versions are both 0.

  • ^1.2.x := >=1.2.0 <2.0.0
  • ^0.0.x := >=0.0.0 <0.1.0
  • ^0.0 := >=0.0.0 <0.1.0

Missing minor and patch values desugar to zero, but also allow flexibility within those values, even if the major version is zero.

  • ^1.x := >=1.0.0 <2.0.0
  • ^0.x := >=0.0.0 <1.0.0

Range Grammar

Putting all this together, here is a Backus-Naur grammar for ranges, for the benefit of parser authors:

range-set  ::= range ( logical-or range ) *
logical-or ::= ( ' ' ) * '||' ( ' ' ) *
range      ::= hyphen | simple ( ' ' simple ) * | ''
hyphen     ::= partial ' - ' partial
simple     ::= primitive | partial | tilde | caret
primitive  ::= ( '<' | '>' | '>=' | '<=' | '=' ) partial
partial    ::= xr ( '.' xr ( '.' xr qualifier ? )? )?
xr         ::= 'x' | 'X' | '*' | nr
nr         ::= '0' | ['1'-'9'] ( ['0'-'9'] ) *
tilde      ::= '~' partial
caret      ::= '^' partial
qualifier  ::= ( '-' pre )? ( '+' build )?
pre        ::= parts
build      ::= parts
parts      ::= part ( '.' part ) *
part       ::= nr | [-0-9A-Za-z]+

Functions

All methods and classes take a final options object argument. All options in this object are false by default. The options supported are:

  • loose Be more forgiving about not-quite-valid semver strings. (Any resulting output will always be 100% strict compliant, of course.) For backwards compatibility reasons, if the options argument is a boolean value instead of an object, it is interpreted to be the loose param.
  • includePrerelease Set to suppress the default behavior of excluding prerelease tagged versions from ranges unless they are explicitly opted into.

Strict-mode Comparators and Ranges will be strict about the SemVer strings that they parse.

  • valid(v): Return the parsed version, or null if it’s not valid.
  • inc(v, release): Return the version incremented by the release type (major, premajor, minor, preminor, patch, prepatch, or prerelease), or null if it’s not valid
    • premajor in one call will bump the version up to the next major version and down to a prerelease of that major version. preminor, and prepatch work the same way.
    • If called from a non-prerelease version, the prerelease will work the same as prepatch. It increments the patch version, then makes a prerelease. If the input version is already a prerelease it simply increments it.
  • prerelease(v): Returns an array of prerelease components, or null if none exist. Example: prerelease('1.2.3-alpha.1') -> ['alpha', 1]
  • major(v): Return the major version number.
  • minor(v): Return the minor version number.
  • patch(v): Return the patch version number.
  • intersects(r1, r2, loose): Return true if the two supplied ranges or comparators intersect.
  • parse(v): Attempt to parse a string as a semantic version, returning either a SemVer object or null.

Comparison

  • gt(v1, v2): v1 > v2
  • gte(v1, v2): v1 >= v2
  • lt(v1, v2): v1 < v2
  • lte(v1, v2): v1 <= v2
  • eq(v1, v2): v1 == v2 This is true if they’re logically equivalent, even if they’re not the exact same string. You already know how to compare strings.
  • neq(v1, v2): v1 != v2 The opposite of eq.
  • cmp(v1, comparator, v2): Pass in a comparison string, and it’ll call the corresponding function above. "===" and "!==" do simple string comparison, but are included for completeness. Throws if an invalid comparison string is provided.
  • compare(v1, v2): Return 0 if v1 == v2, or 1 if v1 is greater, or -1 if v2 is greater. Sorts in ascending order if passed to Array.sort().
  • rcompare(v1, v2): The reverse of compare. Sorts an array of versions in descending order when passed to Array.sort().
  • compareBuild(v1, v2): The same as compare but considers build when two versions are equal. Sorts in ascending order if passed to Array.sort().
  • diff(v1, v2): Returns difference between two versions by the release type (major, premajor, minor, preminor, patch, prepatch, or prerelease), or null if the versions are the same.

Comparators

  • intersects(comparator): Return true if the comparators intersect

Ranges

  • validRange(range): Return the valid range or null if it’s not valid
  • satisfies(version, range): Return true if the version satisfies the range.
  • maxSatisfying(versions, range): Return the highest version in the list that satisfies the range, or null if none of them do.
  • minSatisfying(versions, range): Return the lowest version in the list that satisfies the range, or null if none of them do.
  • minVersion(range): Return the lowest version that can possibly match the given range.
  • gtr(version, range): Return true if version is greater than all the versions possible in the range.
  • ltr(version, range): Return true if version is less than all the versions possible in the range.
  • outside(version, range, hilo): Return true if the version is outside the bounds of the range in either the high or low direction. The hilo argument must be either the string '>' or '<'. (This is the function called by gtr and ltr.)
  • intersects(range): Return true if any of the range's comparators intersect

Note that, since ranges may be non-contiguous, a version might not be greater than a range, less than a range, or satisfy a range! For example, the range 1.2 <1.2.9 || >2.0.0 would have a hole from 1.2.9 until 2.0.0, so the version 1.2.10 would not be greater than the range (because 2.0.1 satisfies, which is higher), nor less than the range (since 1.2.8 satisfies, which is lower), and it also does not satisfy the range.

If you want to know if a version satisfies or does not satisfy a range, use the satisfies(version, range) function.

Coercion

  • coerce(version, options): Coerces a string to semver if possible

This aims to provide a very forgiving translation of a non-semver string to semver. It looks for the first digit in a string, and consumes all remaining characters which satisfy at least a partial semver (e.g., 1, 1.2, 1.2.3) up to the max permitted length (256 characters). Longer versions are simply truncated (4.6.3.9.2-alpha2 becomes 4.6.3). All surrounding text is simply ignored (v3.4 replaces v3.3.1 becomes 3.4.0). Only text which lacks digits will fail coercion (version one is not valid). The maximum length for any semver component considered for coercion is 16 characters; longer components will be ignored (10000000000000000.4.7.4 becomes 4.7.4). The maximum value for any semver component is Number.MAX_SAFE_INTEGER || (2**53 - 1); higher value components are invalid (9999999999999999.4.7.4 is likely invalid).

If the options.rtl flag is set, then coerce will return the right-most coercible tuple that does not share an ending index with a longer coercible tuple. For example, 1.2.3.4 will return 2.3.4 in rtl mode, not 4.0.0. 1.2.3/4 will return 4.0.0, because the 4 is not a part of any other overlapping SemVer tuple.

Clean

  • clean(version): Clean a string to be a valid semver if possible

This will return a cleaned and trimmed semver version. If the provided version is not valid, null is returned. This does not work for ranges.

ex.

  • s.clean(' = v 2.1.5foo'): null
  • s.clean(' = v 2.1.5foo', { loose: true }): '2.1.5-foo'
  • s.clean(' = v 2.1.5-foo'): null
  • s.clean(' = v 2.1.5-foo', { loose: true }): '2.1.5-foo'
  • s.clean('=v2.1.5'): '2.1.5'
  • s.clean(' =v2.1.5'): '2.1.5'
  • s.clean(' 2.1.5 '): '2.1.5'
  • s.clean('~1.0.0'): null




If you want to know if a version satisfies or does not satisfy a range, use the satisfies(version, range) function.

Coercion

  • coerce(version, options): Coerces a string to semver if possible

This aims to provide a very forgiving translation of a non-semver string to semver. It looks for the first digit in a string, and consumes all remaining characters which satisfy at least a partial semver (e.g., 1, 1.2, 1.2.3) up to the max permitted length (256 characters). Longer versions are simply truncated (4.6.3.9.2-alpha2 becomes 4.6.3). All surrounding text is simply ignored (v3.4 replaces v3.3.1 becomes 3.4.0). Only text which lacks digits will fail coercion (version one is not valid). The maximum length for any semver component considered for coercion is 16 characters; longer components will be ignored (10000000000000000.4.7.4 becomes 4.7.4). The maximum value for any semver component is Integer.MAX_SAFE_INTEGER || (2**53 - 1); higher value components are invalid (9999999999999999.4.7.4 is likely invalid).

If the options.rtl flag is set, then coerce will return the right-most coercible tuple that does not share an ending index with a longer coercible tuple. For example, 1.2.3.4 will return 2.3.4 in rtl mode, not 4.0.0. 1.2.3/4 will return 4.0.0, because the 4 is not a part of any other overlapping SemVer tuple.

Clean

  • clean(version): Clean a string to be a valid semver if possible

This will return a cleaned and trimmed semver version. If the provided version is not valid a null will be returned. This does not work for ranges.

ex. * s.clean(' = v 2.1.5foo'): null * s.clean(' = v 2.1.5foo', { loose: true }): '2.1.5-foo' * s.clean(' = v 2.1.5-foo'): null * s.clean(' = v 2.1.5-foo', { loose: true }): '2.1.5-foo' * s.clean('=v2.1.5'): '2.1.5' * s.clean(' =v2.1.5'): 2.1.5 * s.clean(' 2.1.5 '): '2.1.5' * s.clean('~1.0.0'): null



semver(1) – The semantic versioner for npm

Install

Usage

As a node module:

As a command-line utility:

$ semver -h

A JavaScript implementation of the https://semver.org/ specification

Usage: semver [options] <version> [<version> [...]]
Prints valid versions sorted by SemVer precedence

Options:
-r --range <range>
        Print versions that match the specified range.

-i --increment [<level>]
        Increment a version by the specified level.  Level can
        be one of: major, minor, patch, premajor, preminor,
        prepatch, or prerelease.  Default level is 'patch'.
        Only one version may be specified.

--preid <identifier>
        Identifier to be used to prefix premajor, preminor,
        prepatch or prerelease version increments.

-l --loose
        Interpret versions and ranges loosely

-p --include-prerelease
        Always include prerelease versions in range matching

-c --coerce
        Coerce a string into SemVer if possible
        (does not imply --loose)

--rtl
        Coerce version strings right to left

--ltr
        Coerce version strings left to right (default)

Program exits successfully if any valid version satisfies
all supplied ranges, and prints all satisfying versions.

If no satisfying versions are found, then exits failure.

Versions are printed in ascending order, so supplying
multiple versions to the utility will just sort them.

Versions

A “version” is described by the v2.0.0 specification found at https://semver.org/.

A leading "=" or "v" character is stripped off and ignored.

Ranges

A version range is a set of comparators which specify versions that satisfy the range.

A comparator is composed of an operator and a version. The set of primitive operators is:

  • < Less than
  • <= Less than or equal to
  • > Greater than
  • >= Greater than or equal to
  • = Equal. If no operator is specified, then equality is assumed, so this operator is optional, but MAY be included.

For example, the comparator >=1.2.7 would match the versions 1.2.7, 1.2.8, 2.5.3, and 1.3.9, but not the versions 1.2.6 or 1.1.0.

Comparators can be joined by whitespace to form a comparator set, which is satisfied by the intersection of all of the comparators it includes.

A range is composed of one or more comparator sets, joined by ||. A version matches a range if and only if every comparator in at least one of the ||-separated comparator sets is satisfied by the version.

For example, the range >=1.2.7 <1.3.0 would match the versions 1.2.7, 1.2.8, and 1.2.99, but not the versions 1.2.6, 1.3.0, or 1.1.0.

The range 1.2.7 || >=1.2.9 <2.0.0 would match the versions 1.2.7, 1.2.9, and 1.4.6, but not the versions 1.2.8 or 2.0.0.

Prerelease Tags

If a version has a prerelease tag (for example, 1.2.3-alpha.3) then it will only be allowed to satisfy comparator sets if at least one comparator with the same [major, minor, patch] tuple also has a prerelease tag.

For example, the range >1.2.3-alpha.3 would be allowed to match the version 1.2.3-alpha.7, but it would not be satisfied by 3.4.5-alpha.9, even though 3.4.5-alpha.9 is technically “greater than” 1.2.3-alpha.3 according to the SemVer sort rules. The version range only accepts prerelease tags on the 1.2.3 version. The version 3.4.5 would satisfy the range, because it does not have a prerelease flag, and 3.4.5 is greater than 1.2.3-alpha.7.

The purpose for this behavior is twofold. First, prerelease versions frequently are updated very quickly, and contain many breaking changes that are (by the author’s design) not yet fit for public consumption. Therefore, by default, they are excluded from range matching semantics.

Second, a user who has opted into using a prerelease version has clearly indicated the intent to use that specific set of alpha/beta/rc versions. By including a prerelease tag in the range, the user is indicating that they are aware of the risk. However, it is still not appropriate to assume that they have opted into taking a similar risk on the next set of prerelease versions.

Note that this behavior can be suppressed (treating all prerelease versions as if they were normal versions, for the purpose of range matching) by setting the includePrerelease flag on the options object to any functions that do range matching.

Prerelease Identifiers

The method .inc takes an additional identifier string argument that will append the value of the string as a prerelease identifier:

command-line example:

Which then can be used to increment further:

Advanced Range Syntax

Advanced range syntax desugars to primitive comparators in deterministic ways.

Advanced ranges may be combined in the same way as primitive comparators using white space or ||.

Hyphen Ranges X.Y.Z - A.B.C

Specifies an inclusive set.

  • 1.2.3 - 2.3.4 := >=1.2.3 <=2.3.4

If a partial version is provided as the first version in the inclusive range, then the missing pieces are replaced with zeroes.

  • 1.2 - 2.3.4 := >=1.2.0 <=2.3.4

If a partial version is provided as the second version in the inclusive range, then all versions that start with the supplied parts of the tuple are accepted, but nothing that would be greater than the provided tuple parts.

  • 1.2.3 - 2.3 := >=1.2.3 <2.4.0
  • 1.2.3 - 2 := >=1.2.3 <3.0.0

X-Ranges 1.2.x 1.X 1.2.* *

Any of X, x, or * may be used to “stand in” for one of the numeric values in the [major, minor, patch] tuple.

  • * := >=0.0.0 (Any version satisfies)
  • 1.x := >=1.0.0 <2.0.0 (Matching major version)
  • 1.2.x := >=1.2.0 <1.3.0 (Matching major and minor versions)

A partial version range is treated as an X-Range, so the special character is in fact optional.

  • "" (empty string) := * := >=0.0.0
  • 1 := 1.x.x := >=1.0.0 <2.0.0
  • 1.2 := 1.2.x := >=1.2.0 <1.3.0

Tilde Ranges ~1.2.3 ~1.2 ~1

Allows patch-level changes if a minor version is specified on the comparator. Allows minor-level changes if not.

  • ~1.2.3 := >=1.2.3 <1.(2+1).0 := >=1.2.3 <1.3.0
  • ~1.2 := >=1.2.0 <1.(2+1).0 := >=1.2.0 <1.3.0 (Same as 1.2.x)
  • ~1 := >=1.0.0 <(1+1).0.0 := >=1.0.0 <2.0.0 (Same as 1.x)
  • ~0.2.3 := >=0.2.3 <0.(2+1).0 := >=0.2.3 <0.3.0
  • ~0.2 := >=0.2.0 <0.(2+1).0 := >=0.2.0 <0.3.0 (Same as 0.2.x)
  • ~0 := >=0.0.0 <(0+1).0.0 := >=0.0.0 <1.0.0 (Same as 0.x)
  • ~1.2.3-beta.2 := >=1.2.3-beta.2 <1.3.0 Note that prereleases in the 1.2.3 version will be allowed, if they are greater than or equal to beta.2. So, 1.2.3-beta.4 would be allowed, but 1.2.4-beta.2 would not, because it is a prerelease of a different [major, minor, patch] tuple.

Caret Ranges ^1.2.3 ^0.2.5 ^0.0.4

Allows changes that do not modify the left-most non-zero element in the [major, minor, patch] tuple. In other words, this allows patch and minor updates for versions 1.0.0 and above, patch updates for versions 0.X >=0.1.0, and no updates for versions 0.0.X.

Many authors treat a 0.x version as if the x were the major “breaking-change” indicator.

Caret ranges are ideal when an author may make breaking changes between 0.2.4 and 0.3.0 releases, which is a common practice. However, it presumes that there will not be breaking changes between 0.2.4 and 0.2.5. It allows for changes that are presumed to be additive (but non-breaking), according to commonly observed practices.

  • ^1.2.3 := >=1.2.3 <2.0.0
  • ^0.2.3 := >=0.2.3 <0.3.0
  • ^0.0.3 := >=0.0.3 <0.0.4
  • ^1.2.3-beta.2 := >=1.2.3-beta.2 <2.0.0 Note that prereleases in the 1.2.3 version will be allowed, if they are greater than or equal to beta.2. So, 1.2.3-beta.4 would be allowed, but 1.2.4-beta.2 would not, because it is a prerelease of a different [major, minor, patch] tuple.
  • ^0.0.3-beta := >=0.0.3-beta <0.0.4 Note that prereleases in the 0.0.3 version only will be allowed, if they are greater than or equal to beta. So, 0.0.3-pr.2 would be allowed.

When parsing caret ranges, a missing patch value desugars to the number 0, but will allow flexibility within that value, even if the major and minor versions are both 0.

  • ^1.2.x := >=1.2.0 <2.0.0
  • ^0.0.x := >=0.0.0 <0.1.0
  • ^0.0 := >=0.0.0 <0.1.0

Missing minor and patch values desugar to zero, but also allow flexibility within those values, even if the major version is zero.

  • ^1.x := >=1.0.0 <2.0.0
  • ^0.x := >=0.0.0 <1.0.0

Range Grammar

Putting all this together, here is a Backus-Naur grammar for ranges, for the benefit of parser authors:

range-set  ::= range ( logical-or range ) *
logical-or ::= ( ' ' ) * '||' ( ' ' ) *
range      ::= hyphen | simple ( ' ' simple ) * | ''
hyphen     ::= partial ' - ' partial
simple     ::= primitive | partial | tilde | caret
primitive  ::= ( '<' | '>' | '>=' | '<=' | '=' ) partial
partial    ::= xr ( '.' xr ( '.' xr qualifier ? )? )?
xr         ::= 'x' | 'X' | '*' | nr
nr         ::= '0' | ['1'-'9'] ( ['0'-'9'] ) *
tilde      ::= '~' partial
caret      ::= '^' partial
qualifier  ::= ( '-' pre )? ( '+' build )?
pre        ::= parts
build      ::= parts
parts      ::= part ( '.' part ) *
part       ::= nr | [-0-9A-Za-z]+

Functions

All methods and classes take a final options object argument. All options in this object are false by default. The options supported are:

  • loose Be more forgiving about not-quite-valid semver strings. (Any resulting output will always be 100% strict compliant, of course.) For backwards compatibility reasons, if the options argument is a boolean value instead of an object, it is interpreted to be the loose param.
  • includePrerelease Set to suppress the default behavior of excluding prerelease tagged versions from ranges unless they are explicitly opted into.

Strict-mode Comparators and Ranges will be strict about the SemVer strings that they parse.

  • valid(v): Return the parsed version, or null if it’s not valid.
  • inc(v, release): Return the version incremented by the release type (major, premajor, minor, preminor, patch, prepatch, or prerelease), or null if it’s not valid
    • premajor in one call will bump the version up to the next major version and down to a prerelease of that major version. preminor and prepatch work the same way.
    • If called from a non-prerelease version, the prerelease will work the same as prepatch. It increments the patch version, then makes a prerelease. If the input version is already a prerelease it simply increments it.
  • prerelease(v): Returns an array of prerelease components, or null if none exist. Example: prerelease('1.2.3-alpha.1') -> ['alpha', 1]
  • major(v): Return the major version number.
  • minor(v): Return the minor version number.
  • patch(v): Return the patch version number.
  • intersects(r1, r2, loose): Return true if the two supplied ranges or comparators intersect.
  • parse(v): Attempt to parse a string as a semantic version, returning either a SemVer object or null.

Comparison

  • gt(v1, v2): v1 > v2
  • gte(v1, v2): v1 >= v2
  • lt(v1, v2): v1 < v2
  • lte(v1, v2): v1 <= v2
  • eq(v1, v2): v1 == v2 This is true if they’re logically equivalent, even if they’re not the exact same string. You already know how to compare strings.
  • neq(v1, v2): v1 != v2 The opposite of eq.
  • cmp(v1, comparator, v2): Pass in a comparison string, and it’ll call the corresponding function above. "===" and "!==" do simple string comparison, but are included for completeness. Throws if an invalid comparison string is provided.
  • compare(v1, v2): Return 0 if v1 == v2, or 1 if v1 is greater, or -1 if v2 is greater. Sorts in ascending order if passed to Array.sort().
  • rcompare(v1, v2): The reverse of compare. Sorts an array of versions in descending order when passed to Array.sort().
  • compareBuild(v1, v2): The same as compare but considers build when two versions are equal. Sorts in ascending order if passed to Array.sort().
  • diff(v1, v2): Returns difference between two versions by the release type (major, premajor, minor, preminor, patch, prepatch, or prerelease), or null if the versions are the same.

Comparators

  • intersects(comparator): Return true if the comparators intersect

Ranges

  • validRange(range): Return the valid range or null if it’s not valid
  • satisfies(version, range): Return true if the version satisfies the range.
  • maxSatisfying(versions, range): Return the highest version in the list that satisfies the range, or null if none of them do.
  • minSatisfying(versions, range): Return the lowest version in the list that satisfies the range, or null if none of them do.
  • minVersion(range): Return the lowest version that can possibly match the given range.
  • gtr(version, range): Return true if version is greater than all the versions possible in the range.
  • ltr(version, range): Return true if version is less than all the versions possible in the range.
  • outside(version, range, hilo): Return true if the version is outside the bounds of the range in either the high or low direction. The hilo argument must be either the string '>' or '<'. (This is the function called by gtr and ltr.)
  • intersects(range): Return true if any of the ranges comparators intersect

Note that, since ranges may be non-contiguous, a version might not be greater than a range, less than a range, or satisfy a range! For example, the range 1.2 <1.2.9 || >2.0.0 would have a hole from 1.2.9 until 2.0.0, so the version 1.2.10 would not be greater than the range (because 2.0.1 satisfies, which is higher), nor less than the range (since 1.2.8 satisfies, which is lower), and it also does not satisfy the range.

If you want to know if a version satisfies or does not satisfy a range, use the satisfies(version, range) function.

Coercion

  • coerce(version, options): Coerces a string to semver if possible

This aims to provide a very forgiving translation of a non-semver string to semver. It looks for the first digit in a string, and consumes all remaining characters which satisfy at least a partial semver (e.g., 1, 1.2, 1.2.3) up to the max permitted length (256 characters). Longer versions are simply truncated (4.6.3.9.2-alpha2 becomes 4.6.3). All surrounding text is simply ignored (v3.4 replaces v3.3.1 becomes 3.4.0). Only text which lacks digits will fail coercion (version one is not valid). The maximum length for any semver component considered for coercion is 16 characters; longer components will be ignored (10000000000000000.4.7.4 becomes 4.7.4). The maximum value for any semver component is Number.MAX_SAFE_INTEGER || (2**53 - 1); higher value components are invalid (9999999999999999.4.7.4 is likely invalid).

If the options.rtl flag is set, then coerce will return the right-most coercible tuple that does not share an ending index with a longer coercible tuple. For example, 1.2.3.4 will return 2.3.4 in rtl mode, not 4.0.0. 1.2.3/4 will return 4.0.0, because the 4 is not a part of any other overlapping SemVer tuple.

Clean

  • clean(version): Clean a string to be a valid semver if possible

This will return a cleaned and trimmed semver version. If the provided version is not valid a null will be returned. This does not work for ranges.

Examples:

  • s.clean(' = v 2.1.5foo'): null
  • s.clean(' = v 2.1.5foo', { loose: true }): '2.1.5-foo'
  • s.clean(' = v 2.1.5-foo'): null
  • s.clean(' = v 2.1.5-foo', { loose: true }): '2.1.5-foo'
  • s.clean('=v2.1.5'): '2.1.5'
  • s.clean(' =v2.1.5'): '2.1.5'
  • s.clean(' 2.1.5 '): '2.1.5'
  • s.clean('~1.0.0'): null



body-parser

NPM Version NPM Downloads Build Status Test Coverage

Node.js body parsing middleware.

Parse incoming request bodies in a middleware before your handlers, available under the req.body property.

Note As req.body’s shape is based on user-controlled input, all properties and values in this object are untrusted and should be validated before trusting. For example, req.body.foo.toString() may fail in multiple ways: the foo property may not be there or may not be a string, and toString may not be a function but instead a string or other user input.

Learn about the anatomy of an HTTP transaction in Node.js.

This does not handle multipart bodies, due to their complex and typically large nature. For multipart bodies, you may be interested in the following modules:

This module provides the following parsers:

Other body parsers you might be interested in:

Installation

API

The bodyParser object exposes various factories to create middlewares. All middlewares will populate the req.body property with the parsed body when the Content-Type request header matches the type option, or an empty object ({}) if there was no body to parse, the Content-Type was not matched, or an error occurred.

The various errors returned by this module are described in the errors section.

bodyParser.json(options)

Returns middleware that only parses json and only looks at requests where the Content-Type header matches the type option. This parser accepts any Unicode encoding of the body and supports automatic inflation of gzip and deflate encodings.

A new body object containing the parsed data is populated on the request object after the middleware (i.e. req.body).

Options

The json function takes an optional options object that may contain any of the following keys:

inflate

When set to true, then deflated (compressed) bodies will be inflated; when false, deflated bodies are rejected. Defaults to true.

limit

Controls the maximum request body size. If this is a number, then the value specifies the number of bytes; if it is a string, the value is passed to the bytes library for parsing. Defaults to '100kb'.

reviver

The reviver option is passed directly to JSON.parse as the second argument. You can find more information on this argument in the MDN documentation about JSON.parse.

strict

When set to true, will only accept arrays and objects; when false will accept anything JSON.parse accepts. Defaults to true.

type

The type option is used to determine what media type the middleware will parse. This option can be a string, array of strings, or a function. If not a function, type option is passed directly to the type-is library and this can be an extension name (like json), a mime type (like application/json), or a mime type with a wildcard (like */* or */json). If a function, the type option is called as fn(req) and the request is parsed if it returns a truthy value. Defaults to application/json.

verify

The verify option, if supplied, is called as verify(req, res, buf, encoding), where buf is a Buffer of the raw request body and encoding is the encoding of the request. The parsing can be aborted by throwing an error.

bodyParser.raw(options)

Returns middleware that parses all bodies as a Buffer and only looks at requests where the Content-Type header matches the type option. This parser supports automatic inflation of gzip and deflate encodings.

A new body object containing the parsed data is populated on the request object after the middleware (i.e. req.body). This will be a Buffer object of the body.

Options

The raw function takes an optional options object that may contain any of the following keys:

inflate

When set to true, then deflated (compressed) bodies will be inflated; when false, deflated bodies are rejected. Defaults to true.

limit

Controls the maximum request body size. If this is a number, then the value specifies the number of bytes; if it is a string, the value is passed to the bytes library for parsing. Defaults to '100kb'.

type

The type option is used to determine what media type the middleware will parse. This option can be a string, array of strings, or a function. If not a function, type option is passed directly to the type-is library and this can be an extension name (like bin), a mime type (like application/octet-stream), or a mime type with a wildcard (like */* or application/*). If a function, the type option is called as fn(req) and the request is parsed if it returns a truthy value. Defaults to application/octet-stream.

verify

The verify option, if supplied, is called as verify(req, res, buf, encoding), where buf is a Buffer of the raw request body and encoding is the encoding of the request. The parsing can be aborted by throwing an error.

bodyParser.text(options)

Returns middleware that parses all bodies as a string and only looks at requests where the Content-Type header matches the type option. This parser supports automatic inflation of gzip and deflate encodings.

A new body string containing the parsed data is populated on the request object after the middleware (i.e. req.body). This will be a string of the body.

Options

The text function takes an optional options object that may contain any of the following keys:

defaultCharset

Specify the default character set for the text content if the charset is not specified in the Content-Type header of the request. Defaults to utf-8.

inflate

When set to true, then deflated (compressed) bodies will be inflated; when false, deflated bodies are rejected. Defaults to true.

limit

Controls the maximum request body size. If this is a number, then the value specifies the number of bytes; if it is a string, the value is passed to the bytes library for parsing. Defaults to '100kb'.

type

The type option is used to determine what media type the middleware will parse. This option can be a string, array of strings, or a function. If not a function, type option is passed directly to the type-is library and this can be an extension name (like txt), a mime type (like text/plain), or a mime type with a wildcard (like */* or text/*). If a function, the type option is called as fn(req) and the request is parsed if it returns a truthy value. Defaults to text/plain.

verify

The verify option, if supplied, is called as verify(req, res, buf, encoding), where buf is a Buffer of the raw request body and encoding is the encoding of the request. The parsing can be aborted by throwing an error.

bodyParser.urlencoded(options)

Returns middleware that only parses urlencoded bodies and only looks at requests where the Content-Type header matches the type option. This parser accepts only UTF-8 encoding of the body and supports automatic inflation of gzip and deflate encodings.

A new body object containing the parsed data is populated on the request object after the middleware (i.e. req.body). This object will contain key-value pairs, where the value can be a string or array (when extended is false), or any type (when extended is true).

Options

The urlencoded function takes an optional options object that may contain any of the following keys:

extended

The extended option allows you to choose between parsing the URL-encoded data with the querystring library (when false) or the qs library (when true). The “extended” syntax allows for rich objects and arrays to be encoded into the URL-encoded format, allowing for a JSON-like experience with URL-encoded. For more information, please see the qs library.

Defaults to true, but using the default has been deprecated. Please research the difference between qs and querystring and choose the appropriate setting.

inflate

When set to true, then deflated (compressed) bodies will be inflated; when false, deflated bodies are rejected. Defaults to true.

limit

Controls the maximum request body size. If this is a number, then the value specifies the number of bytes; if it is a string, the value is passed to the bytes library for parsing. Defaults to '100kb'.

parameterLimit

The parameterLimit option controls the maximum number of parameters that are allowed in the URL-encoded data. If a request contains more parameters than this value, a 413 will be returned to the client. Defaults to 1000.

type

The type option is used to determine what media type the middleware will parse. This option can be a string, array of strings, or a function. If not a function, type option is passed directly to the type-is library and this can be an extension name (like urlencoded), a mime type (like application/x-www-form-urlencoded), or a mime type with a wildcard (like */x-www-form-urlencoded). If a function, the type option is called as fn(req) and the request is parsed if it returns a truthy value. Defaults to application/x-www-form-urlencoded.

verify

The verify option, if supplied, is called as verify(req, res, buf, encoding), where buf is a Buffer of the raw request body and encoding is the encoding of the request. The parsing can be aborted by throwing an error.

Errors

The middlewares provided by this module create errors depending on the error condition during parsing. The errors will typically have a status/statusCode property that contains the suggested HTTP response code, an expose property to determine if the message property should be displayed to the client, a type property to determine the type of error without matching against the message, and a body property containing the read body, if available.

The following are the common errors emitted, though any error can come through for various reasons.

content encoding unsupported

This error will occur when the request had a Content-Encoding header that contained an encoding but the “inflation” option was set to false. The status property is set to 415, the type property is set to 'encoding.unsupported', and the encoding property is set to the encoding that is unsupported.

request aborted

This error will occur when the request is aborted by the client before reading the body has finished. The received property will be set to the number of bytes received before the request was aborted and the expected property is set to the number of expected bytes. The status property is set to 400 and type property is set to 'request.aborted'.

request entity too large

This error will occur when the request body’s size is larger than the “limit” option. The limit property will be set to the byte limit and the length property will be set to the request body’s length. The status property is set to 413 and the type property is set to 'entity.too.large'.

request size did not match content length

This error will occur when the request’s length did not match the length from the Content-Length header. This typically occurs when the request is malformed, most often when the Content-Length header was calculated based on characters instead of bytes. The status property is set to 400 and the type property is set to 'request.size.invalid'.

stream encoding should not be set

This error will occur when something called the req.setEncoding method prior to this middleware. This module operates directly on bytes only and you cannot call req.setEncoding when using this module. The status property is set to 500 and the type property is set to 'stream.encoding.set'.

too many parameters

This error will occur when the content of the request exceeds the configured parameterLimit for the urlencoded parser. The status property is set to 413 and the type property is set to 'parameters.too.many'.

unsupported charset “BOGUS”

This error will occur when the request had a charset parameter in the Content-Type header, but the iconv-lite module does not support it OR the parser does not support it. The charset is contained in the message as well as in the charset property. The status property is set to 415, the type property is set to 'charset.unsupported', and the charset property is set to the charset that is unsupported.

unsupported content encoding “bogus”

This error will occur when the request had a Content-Encoding header that contained an unsupported encoding. The encoding is contained in the message as well as in the encoding property. The status property is set to 415, the type property is set to 'encoding.unsupported', and the encoding property is set to the encoding that is unsupported.

Examples

Express/Connect top-level generic

This example demonstrates adding a generic JSON and URL-encoded parser as a top-level middleware, which will parse the bodies of all incoming requests. This is the simplest setup.

Express route-specific

This example demonstrates adding body parsers specifically to the routes that need them. In general, this is the recommended way to use body-parser with Express.

Change accepted type for parsers

All the parsers accept a type option which allows you to change the Content-Type that the middleware will parse.



debug

Build Status Coverage Status Slack OpenCollective OpenCollective

A tiny node.js debugging utility modelled after node core’s debugging technique.

Discussion around the V3 API is under way here

Installation
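The install command was dropped from this copy; debug is installed from npm:

```shell
npm install debug
```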

Usage

debug exposes a function; simply pass this function the name of your module, and it will return a decorated version of console.error for you to pass debug statements to. This will allow you to toggle the debug output for different parts of your module as well as the module as a whole.

Example app.js:

Example worker.js:

The DEBUG environment variable is then used to enable these based on space or comma-delimited names. Here are some examples:

[screenshot: debug output with the http and worker debuggers enabled]
[screenshot: debug output with only the worker debuggers enabled]
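For example, assuming the app.js and worker.js names above, the invocations would look like:

```shell
# enable both the http and worker debuggers
DEBUG=http,worker node app.js

# enable every namespace under worker
DEBUG='worker:*' node worker.js
```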

Windows note

On Windows the environment variable is set using the set command.

 set DEBUG=*,-not_this

Note that PowerShell uses different syntax to set environment variables.

 $env:DEBUG = "*,-not_this"

Then, run the program to be debugged as usual.

Millisecond diff

When actively developing an application it can be useful to see the time elapsed between one debug() call and the next. Suppose for example you invoke debug() before requesting a resource, and after as well; the “+NNNms” suffix will show you how much time was spent between calls.

When stdout is not a TTY, Date#toUTCString() is used instead, prefixing each line with a timestamp and making the output more useful for logging.

Conventions

If you’re using this in one or more of your libraries, you should use the name of your library so that developers may toggle debugging as desired without guessing names. If you have more than one debugger you should prefix them with your library name and use “:” to separate features. For example “bodyParser” from Connect would then be “connect:bodyParser”.

Wildcards

The * character may be used as a wildcard. Suppose for example your library has debuggers named “connect:bodyParser”, “connect:compress”, “connect:session”, instead of listing all three with DEBUG=connect:bodyParser,connect:compress,connect:session, you may simply do DEBUG=connect:*, or to run everything using this module simply use DEBUG=*.

You can also exclude specific debuggers by prefixing them with a “-” character. For example, DEBUG=*,-connect:* would include all debuggers except those starting with “connect:”.

Environment Variables

When running through Node.js, you can set a few environment variables that will change the behavior of the debug logging:

Name Purpose
DEBUG Enables/disables specific debugging namespaces.
DEBUG_COLORS Whether or not to use colors in the debug output.
DEBUG_DEPTH Object inspection depth.
DEBUG_SHOW_HIDDEN Shows hidden properties on inspected objects.

Note: The environment variables beginning with DEBUG_ end up being converted into an Options object that gets used with %o/%O formatters. See the Node.js documentation for util.inspect() for the complete list.

Formatters

Debug uses printf-style formatting. Below are the officially supported formatters:

Formatter Representation
%O Pretty-print an Object on multiple lines.
%o Pretty-print an Object all on a single line.
%s String.
%d Number (both integer and float).
%j JSON. Replaced with the string ‘[Circular]’ if the argument contains circular references.
%% Single percent sign (‘%’). This does not consume an argument.

Custom formatters

You can add custom formatters by extending the debug.formatters object. For example, if you wanted to add support for rendering a Buffer as hex with %h, you could do something like:

Browser support

You can build a browser-ready script using browserify, or just use the browserify-as-a-service build, if you don’t want to build it yourself.

Debug’s enable state is currently persisted by localStorage. Consider a situation where you have worker:a and worker:b and wish to debug both. You can enable this using localStorage.debug:
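The snippet was dropped from this copy; in the browser console:

```javascript
// localStorage is a browser API; this persists across reloads
localStorage.debug = 'worker:a,worker:b'
```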

And then refresh the page.

Web Inspector Colors

Colors are also enabled on “Web Inspectors” that understand the %c formatting option. These are WebKit web inspectors, Firefox (since version 31) and the Firebug plugin for Firefox (any version).

Colored output looks something like:

[screenshot: colored debug output in the browser console]

Output streams

By default debug will log to stderr; however, this can be configured per-namespace by overriding the log method:

Example stdout.js:

Authors

  • TJ Holowaychuk
  • Nathan Rajlich
  • Andrew Rhyne

Backers

Sponsors

Become a sponsor and get your logo on our README on GitHub with a link to your site.




dashdash

A light, featureful and explicit option parsing library for node.js.

Why another one? See below. tl;dr: The others I’ve tried are either too loosey-goosey (not explicit), too big / too many deps, or ill-specified. YMMV.

Follow @trentmick for updates to node-dashdash.



Install

npm install dashdash



Usage

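The usage snippet is missing from this copy. A minimal options array of the shape dashdash expects (see the Option specs section below); with dashdash installed you would pass it to dashdash.parse:

```javascript
// Minimal option specs (shape per dashdash). With dashdash installed:
//   var dashdash = require('dashdash');
//   var opts = dashdash.parse({options: options});  // parses process.argv
var options = [
    {
        names: ['help', 'h'],   // first name becomes the key on `opts`
        type: 'bool',
        help: 'Print this help and exit.'
    }
];
```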


Longer Example

A more realistic starter script “foo.js” is as follows. This also shows using parser.help() for formatted option help.

Some example output from this script (foo.js):

$ node foo.js -h
# opts: { help: true,
  _order: [ { name: 'help', value: true, from: 'argv' } ],
  _args: [] }
# args: []
usage: node foo.js [OPTIONS]
options:
    --version             Print tool version and exit.
    -h, --help            Print this help and exit.
    -v, --verbose         Verbose output. Use multiple times for more verbose.
    -f FILE, --file=FILE  File to process

$ node foo.js -v
# opts: { verbose: [ true ],
  _order: [ { name: 'verbose', value: true, from: 'argv' } ],
  _args: [] }
# args: []

$ node foo.js --version arg1
# opts: { version: true,
  _order: [ { name: 'version', value: true, from: 'argv' } ],
  _args: [ 'arg1' ] }
# args: [ 'arg1' ]

$ node foo.js -f bar.txt
# opts: { file: 'bar.txt',
  _order: [ { name: 'file', value: 'bar.txt', from: 'argv' } ],
  _args: [] }
# args: []

$ node foo.js -vvv --file=blah
# opts: { verbose: [ true, true, true ],
  file: 'blah',
  _order:
   [ { name: 'verbose', value: true, from: 'argv' },
     { name: 'verbose', value: true, from: 'argv' },
     { name: 'verbose', value: true, from: 'argv' },
     { name: 'file', value: 'blah', from: 'argv' } ],
  _args: [] }
# args: []

See the “examples” dir for a number of starter examples using some of dashdash’s features.



Environment variable integration

If you want to allow environment variables to specify options to your tool, dashdash makes this easy. We can change the ‘verbose’ option in the example above to include an ‘env’ field:

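The spec itself is missing from this copy; it is the 'verbose' option from the example above with an 'env' field added (field names per dashdash's option specs):

```javascript
// The 'verbose' option spec, extended with an 'env' fallback:
var verboseOption = {
    names: ['verbose', 'v'],
    type: 'arrayOfBool',
    env: 'FOO_VERBOSE',
    help: 'Verbose output. Use multiple times for more verbose.'
};
```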
then the “FOO_VERBOSE” environment variable can be used to set this option:

$ FOO_VERBOSE=1 node foo.js
# opts: { verbose: [ true ],
  _order: [ { name: 'verbose', value: true, from: 'env' } ],
  _args: [] }
# args: []

Boolean options will interpret the empty string as unset, ‘0’ as false and anything else as true.

$ FOO_VERBOSE= node examples/foo.js                 # not set
# opts: { _order: [], _args: [] }
# args: []

$ FOO_VERBOSE=0 node examples/foo.js                # '0' is false
# opts: { verbose: [ false ],
  _order: [ { key: 'verbose', value: false, from: 'env' } ],
  _args: [] }
# args: []

$ FOO_VERBOSE=1 node examples/foo.js                # true
# opts: { verbose: [ true ],
  _order: [ { key: 'verbose', value: true, from: 'env' } ],
  _args: [] }
# args: []

$ FOO_VERBOSE=boogabooga node examples/foo.js       # true
# opts: { verbose: [ true ],
  _order: [ { key: 'verbose', value: true, from: 'env' } ],
  _args: [] }
# args: []

Non-booleans can be used as well. Strings:

$ FOO_FILE=data.txt node examples/foo.js
# opts: { file: 'data.txt',
  _order: [ { key: 'file', value: 'data.txt', from: 'env' } ],
  _args: [] }
# args: []

Numbers:

$ FOO_TIMEOUT=5000 node examples/foo.js
# opts: { timeout: 5000,
  _order: [ { key: 'timeout', value: 5000, from: 'env' } ],
  _args: [] }
# args: []

$ FOO_TIMEOUT=blarg node examples/foo.js
foo: error: arg for "FOO_TIMEOUT" is not a positive integer: "blarg"

With the includeEnv: true config to parser.help() the environment variable can also be included in help output:

usage: node foo.js [OPTIONS]
options:
    --version             Print tool version and exit.
    -h, --help            Print this help and exit.
    -v, --verbose         Verbose output. Use multiple times for more verbose.
                          Environment: FOO_VERBOSE=1
    -f FILE, --file=FILE  File to process



Bash completion

Dashdash provides a simple way to create a Bash completion file that you can place in your “bash_completion.d” directory (sometimes that is “/usr/local/etc/bash_completion.d/”). Features:

  • Does the right thing with “–” to stop options.
  • Custom optarg and arg types for custom completions.

Dashdash will return bash completion file content given a parser instance:

    var parser = dashdash.createParser({options: options});
    console.log(parser.bashCompletion({name: 'mycli'}));

or directly from an options array of option specs:

    var code = dashdash.bashCompletionFromOptions({
        name: 'mycli',
        options: OPTIONS
    });

Write that content to “/usr/local/etc/bash_completion.d/mycli” and you will have Bash completions for mycli. Alternatively you can write it to any file (e.g. “~/.bashrc”) and source it.

You could add a --completion hidden option to your tool that emits the completion content and document for your users to call that to install Bash completions.

See examples/ddcompletion.js for a complete example, including how one can define bash functions for completion of custom option types. Also see node-cmdln for how it uses this for Bash completion for full multi-subcommand tools.

  • TODO: document specExtra
  • TODO: document includeHidden
  • TODO: document custom types, function complete_FOO guide, completionType
  • TODO: document argtypes


Parser config

Parser construction (i.e. dashdash.createParser(CONFIG)) takes the following fields:

  • options (Array of option specs). Required. See the Option specs section below.

  • interspersed (Boolean). Optional. Default is true. If true this allows interspersed arguments and options. I.e.:

      node ./tool.js -v arg1 arg2 -h   # '-h' is after interspersed args

    Set it to false to have ‘-h’ not get parsed as an option in the above example.

  • allowUnknown (Boolean). Optional. Default is false. If false, this causes unknown arguments to throw an error. I.e.:

      node ./tool.js -v arg1 --afe8asefksjefhas

    Set it to true to treat the unknown option as a positional argument.

    Caveat: When a shortopt group, such as -xaz, contains a mix of known and unknown options, the entire group is passed through unmolested as a positional argument.

    Consider if you have a known short option -a, and parse the following command line:

      node ./tool.js -xaz

    where -x and -z are unknown. There are multiple ways to interpret this:

    1. -x takes a value: {x: 'az'}
    2. -x and -z are both booleans: {x:true,a:true,z:true}

    Since dashdash does not know what -x and -z are, it can’t know if you’d prefer to receive {a:true,_args:['-x','-z']} or {x:'az'}, or {_args:['-xaz']}. Leaving the positional arg unprocessed is the easiest mistake for the user to recover from.



Option specs

Example using all fields (required fields are noted):

Each option spec in the options array must/can have the following fields:

  • name (String) or names (Array). Required. These give the option name and aliases. The first name (if more than one given) is the key for the parsed opts object.

  • type (String). Required. One of:

    • bool
    • string
    • number
    • integer
    • positiveInteger
    • date (epoch seconds, e.g. 1396031701, or ISO 8601 format YYYY-MM-DD[THH:MM:SS[.sss][Z]], e.g. “2014-03-28T18:35:01.489Z”)
    • arrayOfBool
    • arrayOfString
    • arrayOfNumber
    • arrayOfInteger
    • arrayOfPositiveInteger
    • arrayOfDate

    FWIW, these names attempt to match with asserts on assert-plus. You can add your own custom option types with dashdash.addOptionType. See below.

  • completionType (String). Optional. This is used for Bash completion for an option argument. If not specified, then the value of type is used. Any string may be specified, but only the following values have meaning:

    • none: Provide no completions.
    • file: Bash’s default completion (i.e. complete -o default), which includes filenames.
    • Any string FOO for which a function complete_FOO Bash function is defined. This is for custom completions for a given tool. Typically these custom functions are provided in the specExtra argument to dashdash.bashCompletionFromOptions(). See “examples/ddcompletion.js” for an example.
  • env (String or Array of String). Optional. An environment variable name (or names) that can be used as a fallback for this option. For example, given a “foo.js” like this:

      var options = [{names: ['dry-run', 'n'], env: 'FOO_DRY_RUN'}];
      var opts = dashdash.parse({options: options});

    Both node foo.js --dry-run and FOO_DRY_RUN=1 node foo.js would result in opts.dry_run = true.

    An environment variable is only used as a fallback, i.e. it is ignored if the associated option is given in argv.

  • help (String). Optional. Used for parser.help() output.

  • helpArg (String). Optional. Used in help output as the placeholder for the option argument, e.g. the “PATH” in:

      ...
      -f PATH, --file=PATH    File to process
      ...
  • helpWrap (Boolean). Optional, default true. Set this to false to have that option’s help not be text wrapped in <parser>.help() output.

  • default. Optional. A default value used for this option, if the option isn’t specified in argv.

  • hidden (Boolean). Optional, default false. If true, help output will not include this option. See also the includeHidden option to bashCompletionFromOptions() for Bash completion.



Option group headings

You can add headings between option specs in the options array. To do so, simply add an object with only a group property – the string to print as the heading for the subsequent options in the array. For example:

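The example array is missing from this copy; a sketch of the shape described above (the option names are illustrative, taken from the help-config diagram later in this README):

```javascript
// Group heading objects interleaved with ordinary option specs:
var options = [
    { group: 'Armament Options' },
    { names: ['weapon', 'w'], type: 'string', help: 'Weapon with which to crush.' },
    { group: 'General Options' },
    { names: ['help', 'h'], type: 'bool', help: 'Print this help and exit.' }
];
```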
Note: You can use an empty string, {group: ''}, to get a blank line in help output between groups of options.



Help config

The parser.help(...) function is configurable as follows:

  Options:
      Armament Options:
    ^^  -w WEAPON, --weapon=WEAPON  Weapon with which to crush. One of: |
   /                                sword, spear, maul                  |
  /   General Options:                                                  |
 /      -h, --help                  Print this help and exit.           |
/   ^^^^                            ^                                   |
\       `-- indent                   `-- helpCol              maxCol ---'
 `-- headingIndent
  • indent (Number or String). Default 4. Set to a number (for that many spaces) or a string for the literal indent.
  • headingIndent (Number or String). Default half length of indent. Set to a number (for that many spaces) or a string for the literal indent. This indent applies to group heading lines, between normal option lines.
  • nameSort (String). Default is ‘length’. By default the names are sorted to put the short opts first (i.e. ‘-h, --help’ preferred to ‘--help, -h’). Set to ‘none’ to not do this sorting.
  • maxCol (Number). Default 80. Note that reflow is just done on whitespace so a long token in the option help can overflow maxCol.
  • helpCol (Number). If not set a reasonable value will be determined between minHelpCol and maxHelpCol.
  • minHelpCol (Number). Default 20.
  • maxHelpCol (Number). Default 40.
  • helpWrap (Boolean). Default true. Set to false to have option help strings not be textwrapped to the helpCol..maxCol range.
  • includeEnv (Boolean). Default false. If the option has associated environment variables (via the env option spec attribute), then append mention of those envvars to the help string.
  • includeDefault (Boolean). Default false. If the option has a default value (via the default option spec attribute, or a default on the option’s type), then a “Default: VALUE” string will be appended to the help string.


Custom option types

Dashdash includes a good starter set of option types that it will parse for you. However, you can add your own via:

    var dashdash = require('dashdash');
    dashdash.addOptionType({
        name: '...',
        takesArg: true,
        helpArg: '...',
        parseArg: function (option, optstr, arg) { /* ... */ },
        array: false,        // optional
        arrayFlatten: false, // optional
        default: ...,        // optional
        completionType: ...  // optional
    });

For example, a simple option type that accepts ‘yes’, ‘y’, ‘no’ or ‘n’ as a boolean argument would look like:

    var dashdash = require('dashdash');

    function parseYesNo(option, optstr, arg) {
        var argLower = arg.toLowerCase()
        if (~['yes', 'y'].indexOf(argLower)) {
            return true;
        } else if (~['no', 'n'].indexOf(argLower)) {
            return false;
        } else {
            throw new Error(format(
                'arg for "%s" is not "yes" or "no": "%s"', optstr, arg));
        }
    }

    dashdash.addOptionType({
        name: 'yesno',
        takesArg: true,
        helpArg: '<yes|no>',
        parseArg: parseYesNo
    });

    var options = [
        {names: ['answer', 'a'], type: 'yesno'}
    ];
    var opts = dashdash.parse({options: options});

See “examples/custom-option-*.js” for other examples. See the addOptionType block comment in “lib/dashdash.js” for more details. Please let me know with an issue if you write a generally useful one.



Why

Why another node.js option parsing lib?

  • nopt really is just for “tools like npm”. Implicit opts (e.g. ‘--no-foo’ works for every ‘--foo’). Can’t disable abbreviated opts. Can’t do multiple usages of same opt, e.g. ‘-vvv’ (I think). Can’t do grouped short opts.

  • optimist has surprise interpretation of options (at least to me). Implicit opts mean ambiguities and poor error handling for fat-fingering. Its process.exit calls make it hard to use as a library.

  • optparse Incomplete docs. Is this an attempted clone of Python’s optparse? Not clear. Some divergence. The parser.on("name", ...) API is weird.

  • argparse Dep on underscore. No thanks just for option processing. find lib | wc -l -> 26. Overkill. Argparse is a bit different anyway. Not sure I want that.

  • posix-getopt No type validation. Though that isn’t a killer. AFAIK can’t have a long opt without a short alias. I.e. no getopt_long semantics. Also, no whizbang features like generated help output.

  • “commander.js”: I wrote a critique a while back. It seems fine, but last I checked had an outstanding bug that would prevent me from using it.





qs Version Badge

npm badge

A querystring parsing and stringifying library with some added security.

Lead Maintainer: Jordan Harband

The qs module was originally created and maintained by TJ Holowaychuk.

Usage

Parsing Objects

qs allows you to create nested objects within your query strings, by surrounding the name of sub-keys with square brackets []. For example, the string 'foo[bar]=baz' converts to:

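The expected result is missing from this copy: qs.parse('foo[bar]=baz') yields { foo: { bar: 'baz' } }. The following self-contained toy parser illustrates the single-level mechanism without requiring qs (the real library handles arbitrary nesting, depth limits, arrays, and more):

```javascript
// Toy single-level bracket parser illustrating what qs.parse does with
// 'foo[bar]=baz'. Not the real implementation.
function parseNested(pair) {
  const [rawKey, value] = pair.split('=');
  const m = /^([^[]+)\[([^\]]+)\]$/.exec(decodeURIComponent(rawKey));
  if (!m) return { [rawKey]: value };
  return { [m[1]]: { [m[2]]: value } };
}

console.log(parseNested('foo[bar]=baz'));
// -> { foo: { bar: 'baz' } }
```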
When using the plainObjects option the parsed value is returned as a null object, created via Object.create(null) and as such you should be aware that prototype methods will not exist on it and a user may set those names to whatever value they like:

By default parameters that would overwrite properties on the object prototype are ignored, if you wish to keep the data from those fields either use plainObjects as mentioned above, or set allowPrototypes to true which will allow user input to overwrite those properties. WARNING It is generally a bad idea to enable this option as it can cause problems when attempting to use the properties that have been overwritten. Always be careful with this option.

URI encoded strings work too:

You can also nest your objects, like 'foo[bar][baz]=foobarbaz':

By default, when nesting objects qs will only parse up to 5 children deep. This means if you attempt to parse a string like 'a[b][c][d][e][f][g][h][i]=j' your resulting object will be:

This depth can be overridden by passing a depth option to qs.parse(string, [options]):

The depth limit helps mitigate abuse when qs is used to parse user input, and it is recommended to keep it a reasonably small number.

For similar reasons, by default qs will only parse up to 1000 parameters. This can be overridden by passing a parameterLimit option:

To bypass the leading question mark, use ignoreQueryPrefix:

An optional delimiter can also be passed:

Delimiters can be a regular expression too:

Option allowDots can be used to enable dot notation:

If you have to deal with legacy browsers or services, there’s also support for decoding percent-encoded octets as iso-8859-1:

Some services add an initial utf8=✓ value to forms so that old Internet Explorer versions are more likely to submit the form as utf-8. Additionally, the server can check the value against wrong encodings of the checkmark character and detect that a query string or application/x-www-form-urlencoded body was not sent as utf-8, eg. if the form had an accept-charset parameter or the containing page had a different character set.

qs supports this mechanism via the charsetSentinel option. If specified, the utf8 parameter will be omitted from the returned object. It will be used to switch to iso-8859-1/utf-8 mode depending on how the checkmark is encoded.

Important: When you specify both the charset option and the charsetSentinel option, the charset will be overridden when the request contains a utf8 parameter from which the actual charset can be deduced. In that sense the charset will behave as the default charset rather than the authoritative charset.

If you want to decode the &#...; syntax to the actual character, you can specify the interpretNumericEntities option as well:

It also works when the charset has been detected in charsetSentinel mode.

Parsing Arrays

qs can also parse arrays using a similar [] notation:

You may specify an index as well:

Note that the only difference between an index in an array and a key in an object is that the value between the brackets must be a number to create an array. When creating arrays with specific indices, qs will compact a sparse array to only the existing values preserving their order:

Note that an empty string is also a value, and will be preserved:

qs will also limit specifying indices in an array to a maximum index of 20. Any array members with an index greater than 20 will instead be converted to an object with the index as the key. This is needed to handle cases where someone sends, for example, a[999999999], since iterating over such a huge sparse array would take significant time.

This limit can be overridden by passing an arrayLimit option:

To disable array parsing entirely, set parseArrays to false.

If you mix notations, qs will merge the two items into an object:

You can also create arrays of objects:

Some people use a comma to join array elements; qs can parse that too:

(this cannot convert nested objects, such as a={b:1},{c:d})

Stringifying

When stringifying, qs by default URI encodes output. Objects are stringified as you would expect:

This encoding can be disabled by setting the encode option to false:

Encoding can be disabled for keys by setting the encodeValuesOnly option to true:

This encoding can also be replaced by a custom encoding method set as encoder option:

(Note: the encoder option does not apply if encode is false)

Analogue to the encoder there is a decoder option for parse to override decoding of properties and values:

Examples beyond this point will be shown as though the output is not URI encoded for clarity. Please note that the return values in these cases will be URI encoded during real usage.

When arrays are stringified, by default they are given explicit indices:

You may override this by setting the indices option to false:

You may use the arrayFormat option to specify the format of the output array:

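The examples are missing from this copy. The documented formats are 'indices', 'brackets', and 'repeat'; the sketch below reproduces the three output shapes without requiring qs (the real option belongs to qs.stringify):

```javascript
// Sketch of the three arrayFormat output shapes. Not the real qs code.
function stringifyArray(key, values, arrayFormat) {
  return values.map((v, i) => {
    if (arrayFormat === 'indices') return `${key}[${i}]=${v}`;
    if (arrayFormat === 'brackets') return `${key}[]=${v}`;
    return `${key}=${v}`; // 'repeat'
  }).join('&');
}

console.log(stringifyArray('a', ['b', 'c'], 'indices'));  // a[0]=b&a[1]=c
console.log(stringifyArray('a', ['b', 'c'], 'brackets')); // a[]=b&a[]=c
console.log(stringifyArray('a', ['b', 'c'], 'repeat'));   // a=b&a=c
```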
When objects are stringified, by default they use bracket notation:

You may override this to use dot notation by setting the allowDots option to true:

Empty strings and null values will omit the value, but the equals sign (=) remains in place:

Keys with no values (such as an empty object or array) will return nothing:

Properties that are set to undefined will be omitted entirely:

The query string may optionally be prepended with a question mark:

The delimiter may be overridden with stringify as well:

If you only want to override the serialization of Date objects, you can provide a serializeDate option:

You may use the sort option to affect the order of parameter keys:

Finally, you can use the filter option to restrict which keys will be included in the stringified output. If you pass a function, it will be called for each key to obtain the replacement value. Otherwise, if you pass an array, it will be used to select properties and array indices for stringification:

Handling of null values

By default, null values are treated like empty strings:

Parsing does not distinguish between parameters with and without equal signs. Both are converted to empty strings.

To distinguish between null values and empty strings use the strictNullHandling flag. In the result string the null values have no = sign:

To parse values without = back to null use the strictNullHandling flag:

To completely skip rendering keys with null values, use the skipNulls flag:

If you’re communicating with legacy systems, you can switch to iso-8859-1 using the charset option:

Characters that don’t exist in iso-8859-1 will be converted to numeric entities, similar to what browsers do:

You can use the charsetSentinel option to announce the character by including an utf8=✓ parameter with the proper encoding of the checkmark, similar to what Ruby on Rails and others do when submitting forms.

Dealing with special character sets

By default the encoding and decoding of characters is done in utf-8, and iso-8859-1 support is also built in via the charset parameter.

If you wish to encode querystrings to a different character set (e.g. Shift JIS) you can use the qs-iconv library:

This also works for decoding of query strings:

RFC 3986 and RFC 1738 space encoding

RFC 3986 is used as the default option and encodes ’ ’ to %20, which is backward compatible. At the same time, output can be stringified as per RFC 1738 with ’ ’ encoded as ‘+’.

assert.equal(qs.stringify({ a: 'b c' }), 'a=b%20c');
assert.equal(qs.stringify({ a: 'b c' }, { format : 'RFC3986' }), 'a=b%20c');
assert.equal(qs.stringify({ a: 'b c' }, { format : 'RFC1738' }), 'a=b+c');


# Table

GitSpo Mentions Travis build status Coveralls NPM version Canonical Code Style Twitter Follow

Produces a string that represents array data in a text table.

Demo of table displaying a list of missions to the Moon.
Demo of table displaying a list of missions to the Moon.

## Features

  • Works with strings containing fullwidth characters.
  • Works with strings containing ANSI escape codes.
  • Configurable border characters.
  • Configurable content alignment per column.
  • Configurable content padding per column.
  • Configurable column width.
  • Text wrapping.

## Install

Buy Me A Coffee Become a Patron

## Usage

Table data is described using an array of rows, where each row is an array of cells.

import tableImport from 'table';
const { table } = tableImport;

// Using commonjs?
// const {table} = require('table');

let data,
    output;

data = [
    ['0A', '0B', '0C'],
    ['1A', '1B', '1C'],
    ['2A', '2B', '2C']
];

/**
 * @typedef {string} table~cell
 */

/**
 * @typedef {table~cell[]} table~row
 */

/**
 * @typedef {Object} table~columns
 * @property {string} alignment Cell content alignment (enum: left, center, right) (default: left).
 * @property {number} width Column width (default: auto).
 * @property {number} truncate Number of characters at which the content will be truncated (default: Infinity).
 * @property {number} paddingLeft Cell content padding width left (default: 1).
 * @property {number} paddingRight Cell content padding width right (default: 1).
 */

/**
 * @typedef {Object} table~border
 * @property {string} topBody
 * @property {string} topJoin
 * @property {string} topLeft
 * @property {string} topRight
 * @property {string} bottomBody
 * @property {string} bottomJoin
 * @property {string} bottomLeft
 * @property {string} bottomRight
 * @property {string} bodyLeft
 * @property {string} bodyRight
 * @property {string} bodyJoin
 * @property {string} joinBody
 * @property {string} joinLeft
 * @property {string} joinRight
 * @property {string} joinJoin
 */

/**
 * Used to dynamically tell table whether to draw a line separating rows or not.
 * The default behavior is to always return true.
 *
 * @typedef {function} drawHorizontalLine
 * @param {number} index
 * @param {number} size
 * @return {boolean}
 */

/**
 * @typedef {Object} table~config
 * @property {table~border} border
 * @property {table~columns[]} columns Column specific configuration.
 * @property {table~columns} columnDefault Default values for all columns. Column specific settings overwrite the default values.
 * @property {table~drawHorizontalLine} drawHorizontalLine
 */

/**
 * Generates a text table.
 *
 * @param {table~row[]} rows
 * @param {table~config} config
 * @return {String}
 */
output = table(data);

console.log(output);
╔════╤════╤════╗
║ 0A │ 0B │ 0C ║
╟────┼────┼────╢
║ 1A │ 1B │ 1C ║
╟────┼────┼────╢
║ 2A │ 2B │ 2C ║
╚════╧════╧════╝

### Cell Content Alignment

{string} config.columns[{number}].alignment property controls content horizontal alignment within a cell.

Valid values are: “left”, “right” and “center”.

╔════════════╤════════════╤════════════╗
║ 0A         │     0B     │         0C ║
╟────────────┼────────────┼────────────╢
║ 1A         │     1B     │         1C ║
╟────────────┼────────────┼────────────╢
║ 2A         │     2B     │         2C ║
╚════════════╧════════════╧════════════╝
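The alignments shown above could come from a per-column configuration like this sketch (the data and the table import are assumed to be the ones from the Usage example):

```javascript
// Per-column alignment; keys are zero-based column indexes.
const config = {
  columns: {
    0: { alignment: 'left', width: 10 },
    1: { alignment: 'center', width: 10 },
    2: { alignment: 'right', width: 10 }
  }
};

// const output = table(data, config); // as in the Usage example above
```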

### Column Width

{number} config.columns[{number}].width property restricts column width to a fixed width.

╔════╤════════════╤════╗
║ 0A │ 0B         │ 0C ║
╟────┼────────────┼────╢
║ 1A │ 1B         │ 1C ║
╟────┼────────────┼────╢
║ 2A │ 2B         │ 2C ║
╚════╧════════════╧════╝

### Custom Border

{object} config.border property describes characters used to draw the table border.

┌────┬────┬────┐
│ 0A │ 0B │ 0C │
├────┼────┼────┤
│ 1A │ 1B │ 1C │
├────┼────┼────┤
│ 2A │ 2B │ 2C │
└────┴────┴────┘
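A border definition using the property names from the table~border typedef above; this sketch uses the single-line characters that produce the frame shown:

```javascript
// Single-line border characters, keyed by the table~border typedef properties.
const config = {
  border: {
    topBody: '─', topJoin: '┬', topLeft: '┌', topRight: '┐',
    bottomBody: '─', bottomJoin: '┴', bottomLeft: '└', bottomRight: '┘',
    bodyLeft: '│', bodyRight: '│', bodyJoin: '│',
    joinBody: '─', joinLeft: '├', joinRight: '┤', joinJoin: '┼'
  }
};
```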

### Draw Horizontal Line

{function} config.drawHorizontalLine property is a function that is called for every non-content row in the table. The result of the function {boolean} determines whether a row is drawn.

╔════╤════╤════╗
║ 0A │ 0B │ 0C ║
╟────┼────┼────╢
║ 1A │ 1B │ 1C ║
║ 2A │ 2B │ 2C ║
║ 3A │ 3B │ 3C ║
╟────┼────┼────╢
║ 4A │ 4B │ 4C ║
╚════╧════╧════╝
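The output above (lines only at the top, after the first row, before the last row, and at the bottom) could come from a predicate like this sketch:

```javascript
// Draw a line at the top (index 0), after the first row (index 1),
// before the last row (size - 1), and at the bottom (size).
const config = {
  drawHorizontalLine: (index, size) =>
    index === 0 || index === 1 || index === size - 1 || index === size
};
```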

### Single Line Mode

Horizontal lines inside the table are not drawn.

╔═════════════╤═════╤══════════╤═══════╤════════╤══════════════╤═══════════════════╗
║ -rw-r--r--  │ 1   │ pandorym │ staff │ 1529   │ May 23 11:25 │ LICENSE           ║
║ -rw-r--r--  │ 1   │ pandorym │ staff │ 16327  │ May 23 11:58 │ README.md         ║
║ drwxr-xr-x  │ 76  │ pandorym │ staff │ 2432   │ May 23 12:02 │ dist              ║
║ drwxr-xr-x  │ 634 │ pandorym │ staff │ 20288  │ May 23 11:54 │ node_modules      ║
║ -rw-r--r--  │ 1   │ pandorym │ staff │ 525688 │ May 23 11:52 │ package-lock.json ║
║ -rw-r--r--@ │ 1   │ pandorym │ staff │ 2440   │ May 23 11:25 │ package.json      ║
║ drwxr-xr-x  │ 27  │ pandorym │ staff │ 864    │ May 23 11:25 │ src               ║
║ drwxr-xr-x  │ 20  │ pandorym │ staff │ 640    │ May 23 11:25 │ test              ║
╚═════════════╧═════╧══════════╧═══════╧════════╧══════════════╧═══════════════════╝

### Padding Cell Content

{number} config.columns[{number}].paddingLeft and {number} config.columns[{number}].paddingRight properties control content padding within a cell. Property value represents a number of whitespaces used to pad the content.

╔══════╤══════╤════╗
║   0A │ AA   │ 0C ║
║      │ BB   │    ║
║      │ CC   │    ║
╟──────┼──────┼────╢
║   1A │ 1B   │ 1C ║
╟──────┼──────┼────╢
║   2A │ 2B   │ 2C ║
╚══════╧══════╧════╝
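Padding like that shown above could be described with a sketch such as this (the width value on column 1 is an assumption added to force the wrapping seen in the output):

```javascript
// Extra left padding on column 0; a narrow column 1 padded on the right.
const config = {
  columns: {
    0: { paddingLeft: 3 },
    1: { width: 2, paddingRight: 3 }
  }
};
```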

### Predefined Border Templates

You can load one of the predefined border templates using getBorderCharacters function.

# honeywell

╔════╤════╤════╗
║ 0A │ 0B │ 0C ║
╟────┼────┼────╢
║ 1A │ 1B │ 1C ║
╟────┼────┼────╢
║ 2A │ 2B │ 2C ║
╚════╧════╧════╝

# norc

┌────┬────┬────┐
│ 0A │ 0B │ 0C │
├────┼────┼────┤
│ 1A │ 1B │ 1C │
├────┼────┼────┤
│ 2A │ 2B │ 2C │
└────┴────┴────┘

# ramac (ASCII; for use in terminals that do not support Unicode characters)

+----+----+----+
| 0A | 0B | 0C |
|----|----|----|
| 1A | 1B | 1C |
|----|----|----|
| 2A | 2B | 2C |
+----+----+----+

# void (no borders; see "borderless table" section of the documentation)

 0A  0B  0C

 1A  1B  1C

 2A  2B  2C

Raise an issue if you’d like to contribute a new border template.

#### Borderless Table

Simply using “void” border character template creates a table with a lot of unnecessary spacing.

To create a table that is more pleasant to the eye, reset the padding and remove the joining rows, e.g.

0A 0B 0C
1A 1B 1C
2A 2B 2C

### Streaming

table package exports createStream function used to draw a table and append rows.

createStream requires {number} columnDefault.width and {number} columnCount configuration properties.
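A sketch of the minimal configuration carrying the two required properties named above (the width of 50 is an arbitrary example value):

```javascript
// createStream needs an explicit default column width plus the column count.
const streamConfig = {
  columnDefault: { width: 50 },
  columnCount: 2
};

// const stream = createStream(streamConfig); // createStream is exported by table
// stream.write(['row', 'data']);
```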

Streaming current date.

table package uses ANSI escape codes to overwrite the output of the last line when a new row is printed.

The underlying implementation is explained in this Stack Overflow answer.

Streaming supports all of the configuration properties and functionality of a static table (such as auto text wrapping, alignment and padding), e.g.

Streaming random data.

### Text Truncation

To handle content that overflows the container width, table package implements text wrapping. However, sometimes you may want to truncate content that is too long to be displayed in the table.

{number} config.columns[{number}].truncate property (default: Infinity) truncates the text at the specified length.

╔══════════════════════╗
║ Lorem ipsum dolor si ║
║ t amet, consectetur  ║
║ adipiscing elit. Pha ║
║ sellus pulvinar nibh ║
║ sed mauris conva...  ║
╚══════════════════════╝
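The output above (wrapping at the column width, then cutting off with an ellipsis) could come from a configuration like this sketch (the specific width and truncate values are assumptions chosen to match the example):

```javascript
// Wrap at 20 characters per line, but cut the cell content off after 100 characters.
const config = {
  columns: {
    0: { width: 20, truncate: 100 }
  }
};
```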

### Text Wrapping

table package implements auto text wrapping, i.e. text that has width greater than the container width will be separated into multiple lines, e.g.

╔══════════════════════╗
║ Lorem ipsum dolor si ║
║ t amet, consectetur  ║
║ adipiscing elit. Pha ║
║ sellus pulvinar nibh ║
║ sed mauris convallis ║
║ dapibus. Nunc venena ║
║ tis tempus nulla sit ║
║ amet viverra.        ║
╚══════════════════════╝

When wrapWord is true the text is broken at the nearest space or one of the special characters ("-", "_", "\", "/", ".", ",", ";"), e.g.

╔══════════════════════╗
║ Lorem ipsum dolor    ║
║ sit amet,            ║
║ consectetur          ║
║ adipiscing elit.     ║
║ Phasellus pulvinar   ║
║ nibh sed mauris      ║
║ convallis dapibus.   ║
║ Nunc venenatis       ║
║ tempus nulla sit     ║
║ amet viverra.        ║
╚══════════════════════╝
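Word wrapping as shown above could be enabled per column with a sketch like this (width 20 is an assumption matching the example output):

```javascript
// Break at word boundaries instead of at exact character positions.
const config = {
  columns: {
    0: { width: 20, wrapWord: true }
  }
};
```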


cosmiconfig


Cosmiconfig searches for and loads configuration for your program.

It features smart defaults based on conventional expectations in the JavaScript ecosystem. But it’s also flexible enough to search wherever you’d like to search, and load whatever you’d like to load.

By default, Cosmiconfig will start where you tell it to start and search up the directory tree for the following:

  • a package.json property
  • a JSON or YAML, extensionless “rc file”
  • an “rc file” with the extensions .json, .yaml, .yml, .js, or .cjs
  • a .config.js or .config.cjs CommonJS module

For example, if your module’s name is “myapp”, cosmiconfig will search up the directory tree for configuration in the following places:

  • a myapp property in package.json
  • a .myapprc file in JSON or YAML format
  • a .myapprc.json, .myapprc.yaml, .myapprc.yml, .myapprc.js, or .myapprc.cjs file
  • a myapp.config.js or myapp.config.cjs CommonJS module exporting an object

Cosmiconfig continues to search up the directory tree, checking each of these places in each directory, until it finds some acceptable configuration (or hits the home directory).

Installation

npm install cosmiconfig

Tested in Node 10+.

Usage

Create a Cosmiconfig explorer, then either search for or directly load a configuration file.

Result

The result object you get from search or load has the following properties:

  • config: The parsed configuration object. undefined if the file is empty.
  • filepath: The path to the configuration file that was found.
  • isEmpty: true if the configuration file is empty. This property will not be present if the configuration file is not empty.

Asynchronous API

cosmiconfig()

Creates a cosmiconfig instance (“explorer”) configured according to the arguments, and initializes its caches.

moduleName

Type: string. Required.

Your module name. This is used to create the default searchPlaces and packageProp.

If your searchPlaces value will include files, as it does by default (e.g. ${moduleName}rc), your moduleName must consist of characters allowed in filenames. That means you should not copy scoped package names, such as @my-org/my-package, directly into moduleName.

cosmiconfigOptions are documented below. You may not need them, and should first read about the functions you’ll use.

explorer.search()

Searches for a configuration file. Returns a Promise that resolves with a result or with null, if no configuration file is found.

You can do the same thing synchronously with explorerSync.search().

Let’s say your module name is goldengrahams so you initialized with const explorer = cosmiconfig('goldengrahams');. Here’s how your default search() will work:

  • Starting from process.cwd() (or some other directory defined by the searchFrom argument to search()), look for configuration objects in the following places:
    1. A goldengrahams property in a package.json file.
    2. A .goldengrahamsrc file with JSON or YAML syntax.
    3. A .goldengrahamsrc.json, .goldengrahamsrc.yaml, .goldengrahamsrc.yml, .goldengrahamsrc.js, or .goldengrahamsrc.cjs file.
    4. A goldengrahams.config.js or goldengrahams.config.cjs CommonJS module exporting the object.
  • If none of those searches reveal a configuration object, move up one directory level and try again. So the search continues in ./, ../, ../../, ../../../, etc., checking the same places in each directory.
  • Continue searching until arriving at your home directory (or some other directory defined by the cosmiconfig option stopDir).
  • If at any point a parsable configuration is found, the search() Promise resolves with its result (or, with explorerSync.search(), the result is returned).
  • If no configuration object is found, the search() Promise resolves with null (or, with explorerSync.search(), null is returned).
  • If a configuration object is found but is malformed (causing a parsing error), the search() Promise rejects with that error (so you should .catch() it). (Or, with explorerSync.search(), the error is thrown.)

If you know exactly where your configuration file should be, you can use load(), instead.

The search process is highly customizable. Use the cosmiconfig options searchPlaces and loaders to precisely define where you want to look for configurations and how you want to load them.

searchFrom

Type: string. Default: process.cwd().

A filename. search() will start its search here.

If the value is a directory, that’s where the search starts. If it’s a file, the search starts in that file’s directory.

explorer.load()

Loads a configuration file. Returns a Promise that resolves with a result or rejects with an error (if the file does not exist or cannot be loaded).

Use load if you already know where the configuration file is and you just need to load it.

If you load a package.json file, the result will be derived from whatever property is specified as your packageProp.

You can do the same thing synchronously with explorerSync.load().

explorer.clearLoadCache()

Clears the cache used in load().

explorer.clearSearchCache()

Clears the cache used in search().

explorer.clearCaches()

Performs both clearLoadCache() and clearSearchCache().

Synchronous API

cosmiconfigSync()

Creates a synchronous cosmiconfig instance (“explorerSync”) configured according to the arguments, and initializes its caches.

See cosmiconfig().

explorerSync.search()

Synchronous version of explorer.search().

Returns a result or null.

explorerSync.load()

Synchronous version of explorer.load().

Returns a result.

explorerSync.clearLoadCache()

Clears the cache used in load().

explorerSync.clearSearchCache()

Clears the cache used in search().

explorerSync.clearCaches()

Performs both clearLoadCache() and clearSearchCache().

cosmiconfigOptions

Type: Object.

Possible options are documented below.

searchPlaces

Type: Array<string>. Default: See below.

An array of places that search() will check in each directory as it moves up the directory tree. Each place is relative to the directory being searched, and the places are checked in the specified order.

Default searchPlaces:
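For a module named myapp, the defaults in recent versions look roughly like this sketch (check the version you have installed; the exact list can differ):

```javascript
// Approximate default searchPlaces for moduleName === 'myapp'.
const searchPlaces = [
  'package.json',
  '.myapprc',
  '.myapprc.json',
  '.myapprc.yaml',
  '.myapprc.yml',
  '.myapprc.js',
  '.myapprc.cjs',
  'myapp.config.js',
  'myapp.config.cjs'
];
```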

Create your own array to search more, fewer, or altogether different places.

Every item in searchPlaces needs to have a loader in loaders that corresponds to its extension. (Common extensions are covered by default loaders.) Read more about loaders below.

package.json is a special value: When it is included in searchPlaces, Cosmiconfig will always parse it as JSON and load a property within it, not the whole file. That property is defined with the packageProp option, and defaults to your module name.

Examples, with a module named porgy:

loaders

Type: Object. Default: See below.

An object that maps extensions to the loader functions responsible for loading and parsing files with those extensions.

Cosmiconfig exposes its default loaders on a named export defaultLoaders.

Default loaders:

(YAML is a superset of JSON, which means YAML parsers can parse JSON; that is how extensionless files can be either YAML or JSON with only one parser.)

If you provide a loaders object, your object will be merged with the defaults. So you can override one or two without having to override them all.

Keys in loaders are extensions (starting with a period), or noExt to specify the loader for files without extensions, like .myapprc.

Values in loaders are loader functions (described below).

The most common use case for custom loaders value is to load extensionless rc files as strict JSON, instead of JSON or YAML (the default). To accomplish that, provide the following loaders value:

If you want to load files that are not handled by the loader functions Cosmiconfig exposes, you can write a custom loader function or use one from NPM if it exists.

Third-party loaders:

  • [@endemolshinegroup/cosmiconfig-typescript-loader](https://github.com/EndemolShineGroup/cosmiconfig-typescript-loader)

Use cases for custom loader function:

  • Allow configuration syntaxes that aren’t handled by Cosmiconfig’s defaults, like JSON5, INI, or XML.
  • Allow ES2015 modules from .mjs configuration files.
  • Parse JS files with Babel before deriving the configuration.

Custom loader functions have the following signature:
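As a sketch, a loader takes the file's path and its already-read content and returns a configuration object or null (the function name and the JSON.parse body below are hypothetical placeholders for whatever format the loader actually handles):

```javascript
// A loader receives (filepath, content) and returns a config object or null
// (or, for async-only loaders, a Promise resolving with one of those).
const loadMyFormat = (filepath, content) => {
  // Hypothetical: a real loader parses its own format here (JSON5, INI, XML, ...).
  const parsed = JSON.parse(content);
  return parsed === null ? null : parsed; // null means "keep searching"
};
```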

Cosmiconfig reads the file when it checks whether the file exists, so it will provide you with both the file’s path and its content. Do whatever you need to, and return either a configuration object or null (or, for async-only loaders, a Promise that resolves with one of those). null indicates that no real configuration was found and the search should continue.

A few things to note:

  • If you use a custom loader, be aware of whether it’s sync or async: you cannot use async custom loaders with the sync API (cosmiconfigSync()).
  • Special JS syntax can also be handled by using a require hook, because defaultLoaders['.js'] just uses require. Whether you use custom loaders or a require hook is up to you.

Examples:

packageProp

Type: string | Array<string>. Default: `${moduleName}`.

Name of the property in package.json to look for.

Use a period-delimited string or an array of strings to describe a path to nested properties.

For example, the value 'configs.myPackage' or ['configs', 'myPackage'] will get you the "myPackage" value in a package.json like this:
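A sketch of the package.json shape for that first example (the "option" value is a placeholder):

```json
{
  "configs": {
    "myPackage": { "option": true }
  }
}
```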

If nested property names within the path include periods, you need to use an array of strings. For example, the value ['configs', 'foo.bar', 'baz'] will get you the "baz" value in a package.json like this:

If a string includes a period but corresponds to a top-level property name, it will not be interpreted as a period-delimited path. For example, the value 'one.two' will get you the "three" value in a package.json like this:

stopDir

Type: string. Default: Absolute path to your home directory.

Directory where the search will stop.

cache

Type: boolean. Default: true.

If false, no caches will be used. Read more about “Caching” below.

transform

Type: (Result) => Promise<Result> | Result.

A function that transforms the parsed configuration. Receives the result.

If using search() or load() (which are async), the transform function can return the transformed result or return a Promise that resolves with the transformed result. If using cosmiconfigSync, search() or load(), the function must be synchronous and return the transformed result.

The reason you might use this option — instead of simply applying your transform function some other way — is that the transformed result will be cached. If your transformation involves additional filesystem I/O or other potentially slow processing, you can use this option to avoid repeating those steps every time a given configuration is searched or loaded.

ignoreEmptySearchPlaces

Type: boolean. Default: true.

By default, if search() encounters an empty file (containing nothing but whitespace) in one of the searchPlaces, it will ignore the empty file and move on. If you’d like to load empty configuration files, instead, set this option to false.

Why might you want to load empty configuration files? If you want to throw an error, or if an empty configuration file means something to your program.

Caching

As of v2, cosmiconfig uses caching to reduce the need for repetitious reading of the filesystem or expensive transforms. Every new cosmiconfig instance (created with cosmiconfig()) has its own caches.

To avoid or work around caching, you can do the following:

Differences from rc

rc serves its focused purpose well. cosmiconfig differs in a few key ways — making it more useful for some projects, less useful for others:

  • Looks for configuration in some different places: in a package.json property, an rc file, a .config.js file, and rc files with extensions.
  • Built-in support for JSON, YAML, and CommonJS formats.
  • Stops at the first configuration found, instead of finding all that can be found up the directory tree and merging them automatically.
  • Options.
  • Asynchronous by default (though can be run synchronously).

Contributing & Development

Please note that this project is released with a Contributor Code of Conduct. By participating in this project you agree to abide by its terms.

And please do participate!




Google Auth Library


This is Google’s officially supported node.js client library for using OAuth 2.0 authorization and authentication with Google APIs.

Installation

This library is distributed on npm. To add it as a dependency, run the following command:

Ways to authenticate

This library provides a variety of ways to authenticate to your Google services.

  • Application Default Credentials - Use Application Default Credentials when you use a single identity for all users in your application. Especially useful for applications running on Google Cloud.
  • OAuth 2 - Use OAuth2 when you need to perform actions on behalf of the end user.
  • JSON Web Tokens - Use JWT when you are using a single identity for all users. Especially useful for server->server or server->API communication.
  • Google Compute - Directly use a service account on Google Cloud Platform. Useful for server->server or server->API communication.

Application Default Credentials

This library provides an implementation of Application Default Credentials for Node.js. The Application Default Credentials provide a simple way to get authorization credentials for use in calling Google APIs.

They are best suited for cases when the call needs to have the same identity and authorization level for the application independent of the user. This is the recommended approach to authorize calls to Cloud APIs, particularly when you’re building an application that uses Google Cloud Platform.

Download your Service Account Credentials JSON file

To use Application Default Credentials, you first need to download a set of JSON credentials for your project. Go to APIs & Auth > Credentials in the Google Developers Console and select Service account from the Add credentials dropdown.

This file is your only copy of these credentials. It should never be committed with your source code, and should be stored securely.

Once downloaded, store the path to this file in the GOOGLE_APPLICATION_CREDENTIALS environment variable.

Enable the API you want to use

Before making your API call, you must be sure the API you’re calling has been enabled. Go to APIs & Auth > APIs in the Google Developers Console and enable the APIs you’d like to call. For the example below, you must enable the DNS API.

Choosing the correct credential type automatically

Rather than manually creating an OAuth2 client, JWT client, or Compute client, the auth library can create the correct credential type for you, depending upon the environment your code is running under.

For example, a JWT auth client will be created when your code is running on your local developer machine, and a Compute client will be created when the same code is running on Google Cloud Platform. If you need a specific set of scopes, you can pass those in the form of a string or an array to the GoogleAuth constructor.

The code below shows how to retrieve a default credential type, depending upon the runtime environment.

OAuth2

This library comes with an OAuth2 client that allows you to retrieve an access token, refresh it, and retry the request seamlessly if you also provide an expiry_date and the token has expired. The basics of Google’s OAuth2 implementation are explained in the Google Authorization and Authentication documentation.

In the following examples, you may need a CLIENT_ID, CLIENT_SECRET and REDIRECT_URL. You can find these pieces of information by going to the Developer Console, clicking your project > APIs & auth > credentials.

For more information about OAuth2 and how it works, see here.

A complete OAuth2 example

Let’s take a look at a complete example.

const {OAuth2Client} = require('google-auth-library');
const http = require('http');
const url = require('url');
const open = require('open');
const destroyer = require('server-destroy');

// Download your OAuth2 configuration from the Google
const keys = require('./oauth2.keys.json');

/**
 * Start by acquiring a pre-authenticated oAuth2 client.
 */
async function main() {
  const oAuth2Client = await getAuthenticatedClient();
  // Make a simple request to the People API using our pre-authenticated client. The `request()` method
  // takes a GaxiosOptions object. Visit https://github.com/JustinBeckwith/gaxios for details.
  const url = 'https://people.googleapis.com/v1/people/me?personFields=names';
  const res = await oAuth2Client.request({url});
  console.log(res.data);

  // After acquiring an access_token, you may want to check on the audience, expiration,
  // or original scopes requested.  You can do that with the `getTokenInfo` method.
  const tokenInfo = await oAuth2Client.getTokenInfo(
    oAuth2Client.credentials.access_token
  );
  console.log(tokenInfo);
}

/**
 * Create a new OAuth2Client, and go through the OAuth2 content
 * workflow.  Return the full client to the callback.
 */
function getAuthenticatedClient() {
  return new Promise((resolve, reject) => {
    // create an oAuth client to authorize the API call.  Secrets are kept in a `keys.json` file,
    // which should be downloaded from the Google Developers Console.
    const oAuth2Client = new OAuth2Client(
      keys.web.client_id,
      keys.web.client_secret,
      keys.web.redirect_uris[0]
    );

    // Generate the url that will be used for the consent dialog.
    const authorizeUrl = oAuth2Client.generateAuthUrl({
      access_type: 'offline',
      scope: 'https://www.googleapis.com/auth/userinfo.profile',
    });

    // Open an http server to accept the oauth callback. In this simple example, the
    // only request to our webserver is to /oauth2callback?code=<code>
    const server = http
      .createServer(async (req, res) => {
        try {
          if (req.url.indexOf('/oauth2callback') > -1) {
            // acquire the code from the querystring, and close the web server.
            const qs = new url.URL(req.url, 'http://localhost:3000')
              .searchParams;
            const code = qs.get('code');
            console.log(`Code is ${code}`);
            res.end('Authentication successful! Please return to the console.');
            server.destroy();

            // Now that we have the code, use that to acquire tokens.
            const r = await oAuth2Client.getToken(code);
            // Make sure to set the credentials on the OAuth2 client.
            oAuth2Client.setCredentials(r.tokens);
            console.info('Tokens acquired.');
            resolve(oAuth2Client);
          }
        } catch (e) {
          reject(e);
        }
      })
      .listen(3000, () => {
        // open the browser to the authorize url to start the workflow
        open(authorizeUrl, {wait: false}).then(cp => cp.unref());
      });
    destroyer(server);
  });
}

main().catch(console.error);

Handling token events

This library will automatically obtain an access_token, and automatically refresh the access_token if a refresh_token is present. The refresh_token is only returned on the first authorization, so you will want to make sure you store it safely. An easy way to make sure you always store the most recent tokens is to use the tokens event:

Retrieve access token

With the code returned, you can ask for an access token as shown below:

Obtaining a new Refresh Token

If you need to obtain a new refresh_token, ensure the call to generateAuthUrl sets the access_type to offline. The refresh token will only be returned for the first authorization by the user. To force consent, set the prompt property to consent:

Checking access_token information

After obtaining and storing an access_token, at a later time you may want to go check the expiration date, original scopes, or audience for the token. To get the token info, you can use the getTokenInfo method:

This method will throw if the token is invalid.

OAuth2 with Installed Apps (Electron)

If you’re authenticating with OAuth2 from an installed application (like Electron), you may not want to embed your client_secret inside of the application sources. To work around this restriction, you can choose the iOS application type when creating your OAuth2 credentials in the Google Developers console:

application type

If using the iOS type, when creating the OAuth2 client you won’t need to pass a client_secret into the constructor:

JSON Web Tokens

The Google Developers Console provides a .json file that you can use to configure a JWT auth client and authenticate your requests, for example when using a service account.

The parameters for the JWT auth client including how to use it with a .pem file are explained in samples/jwt.js.

Loading credentials from environment variables

Instead of loading credentials from a key file, you can also provide them using an environment variable and the GoogleAuth.fromJSON() method. This is particularly convenient for systems that deploy directly from source control (Heroku, App Engine, etc).

Start by exporting your credentials:

$ export CREDS='{
  "type": "service_account",
  "project_id": "your-project-id",
  "private_key_id": "your-private-key-id",
  "private_key": "your-private-key",
  "client_email": "your-client-email",
  "client_id": "your-client-id",
  "auth_uri": "https://accounts.google.com/o/oauth2/auth",
  "token_uri": "https://accounts.google.com/o/oauth2/token",
  "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
  "client_x509_cert_url": "your-cert-url"
}'

Now you can create a new client from the credentials:

Using a Proxy

You can set the HTTPS_PROXY or https_proxy environment variables to proxy HTTPS requests. When HTTPS_PROXY or https_proxy are set, they will be used to proxy SSL requests that do not have an explicit proxy configuration option present.

Compute

If your application is running on Google Cloud Platform, you can authenticate using the default service account or by specifying a specific service account.

Note: In most cases, you will want to use Application Default Credentials. Direct use of the Compute class is for very specific scenarios.

Working with ID Tokens

Fetching ID Tokens

If your application is running on Cloud Run or Cloud Functions, or using Cloud Identity-Aware Proxy (IAP), you will need to fetch an ID token to access your application. For this, use the method getIdTokenClient on the GoogleAuth client.

For invoking Cloud Run services, your service account will need the Cloud Run Invoker IAM permission.

For invoking Cloud Functions, your service account will need the Function Invoker IAM permission.

A complete example can be found in samples/idtokens-serverless.js.

For invoking Cloud Identity-Aware Proxy, you will need to pass the Client ID used when you set up your protected resource as the target audience.

A complete example can be found in samples/idtokens-iap.js.

Verifying ID Tokens

If you’ve secured your IAP app with signed headers, you can use this library to verify the IAP header:

A complete example can be found in samples/verifyIdToken-iap.js.

Questions/problems?

Contributing

See CONTRIBUTING.



eslint-plugin-import


This plugin intends to support linting of ES2015+ (ES6+) import/export syntax, and prevent issues with misspelling of file paths and import names. All the goodness that the ES2015+ static module syntax intends to provide, marked up in your editor.

IF YOU ARE USING THIS WITH SUBLIME: see the bottom section for important info.

Rules

Static analysis

Helpful warnings

Module systems

  • Report potentially ambiguous parse goal (script vs. module) (unambiguous)
  • Report CommonJS require calls and module.exports or exports.*. (no-commonjs)
  • Report AMD require and define calls. (no-amd)
  • No Node.js builtin modules. (no-nodejs-modules)

Style guide

eslint-plugin-import for enterprise

Available as part of the Tidelift Subscription.

The maintainers of eslint-plugin-import and thousands of other packages are working with Tidelift to deliver commercial support and maintenance for the open source dependencies you use to build your applications. Save time, reduce risk, and improve code health, while paying the maintainers of the exact dependencies you use. Learn more.

Installation

or if you manage ESLint as a dev dependency:

All rules are off by default. However, you may configure them manually in your .eslintrc.(yml|json|js), or extend one of the canned configs:
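As a sketch, extending a canned config from your .eslintrc might look like this (the exact set of configs the plugin publishes may vary by version):

```yaml
extends:
  - eslint:recommended
  - plugin:import/recommended
```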



TypeScript

You may use the following shortcut or assemble your own config using the granular settings described below.

Make sure you have installed @typescript-eslint/parser, which is used in the following configuration. Unfortunately NPM does not allow listing optional peer dependencies.
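A sketch of the shortcut, layered on top of the base config (config names assumed from the plugin's published configs; verify against your installed version):

```yaml
extends:
  - plugin:import/recommended
  - plugin:import/typescript
```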



Resolvers

With the advent of module bundlers and the current state of modules and module syntax specs, it’s not always obvious where import x from 'module' should look to find the file behind module.

Up through v0.10ish, this plugin has directly used substack’s resolve plugin, which implements Node’s import behavior. This works pretty well in most cases.

However, webpack allows a number of things in import module source strings that Node does not, such as loaders (import 'file!./whatever') and a number of aliasing schemes, such as externals: mapping a module id to a global name at runtime (allowing some modules to be included more traditionally via script tags).

In the interest of supporting both of these, v0.11 introduces resolvers.

Currently Node and webpack resolution have been implemented, but the resolvers are just npm packages, so third party packages are supported (and encouraged!).

You can reference resolvers in several ways (in order of precedence):

  • as a conventional eslint-import-resolver name, like eslint-import-resolver-foo:
  • with a full npm module name, like my-awesome-npm-module:
  • with a filesystem path to resolver, defined in this example as a computed property name:

Relative paths will be resolved relative to the source’s nearest package.json or the process’s current working directory if no package.json is found.

If you are interested in writing a resolver, see the spec for more details.



Settings

You may set the following settings in your .eslintrc:

import/extensions

A list of file extensions that will be parsed as modules and inspected for exports.

This defaults to ['.js'], unless you are using the react shared config, in which case it is specified as ['.js', '.jsx'].

If you require more granular extension definitions, you can use:

Note that this is different from (and likely a subset of) any import/resolver extensions settings, which may include .json, .coffee, etc.; those will still factor into the no-unresolved rule.

Also, the following import/ignore patterns will overrule this list.

import/ignore

A list of regex strings that, if matched by a path, will not report the matching module if no exports are found. In practice, this means rules other than no-unresolved will not report on any imports with (absolute filesystem) paths matching this pattern.

no-unresolved has its own ignore setting.

import/core-modules

An array of additional modules to consider as "core" modules: modules that should be considered resolved but have no path on the filesystem. Your resolver may already define some of these (for example, the Node resolver knows about fs and path), so you need not redefine those.

For example, Electron exposes an electron module:

that would otherwise be unresolved. To avoid this, you may provide electron as a core module:

In Electron’s specific case, there is a shared config named electron that specifies this for you.

Contributions of more such shared configs for other platforms are welcome!

import/external-module-folders

An array of folders. Only modules resolved from these folders will be considered "external". Defaults to ["node_modules"]. This is useful if you have configured your resolver or webpack to handle internal paths differently and want modules from certain folders, for example bower_components or jspm_modules, to be treated as "external".

This option is also useful in a monorepo setup: list here all directories that contain monorepo’s packages and they will be treated as external ones no matter which resolver is used.

Each item in this array is either a folder’s name, its subpath, or its absolute prefix path:

  • jspm_modules will match any file or folder named jspm_modules or which has a direct or non-direct parent named jspm_modules, e.g. /home/me/project/jspm_modules or /home/me/project/jspm_modules/some-pkg/index.js.

  • packages/core will match any path that contains these two segments, for example /home/me/project/packages/core/src/utils.js.

  • /home/me/project/packages will only match files and directories inside this directory, and the directory itself.

Please note that incomplete names are not allowed here so components won’t match bower_components and packages/ui won’t match packages/ui-utils (but will match packages/ui/utils).

import/parsers

A map from parsers to file extension arrays. If a file extension is matched, the dependency parser will require and use the map key as the parser instead of the configured ESLint parser. This is useful if you're interoperating with TypeScript directly using webpack, for example:

In this case, @typescript-eslint/parser must be installed and require-able from the running eslint module’s location (i.e., install it as a peer of ESLint).

This is currently only tested with @typescript-eslint/parser (and its predecessor, typescript-eslint-parser) but should theoretically work with any moderately ESTree-compliant parser.

It’s difficult to say how well various plugin features will be supported, too, depending on how far down the rabbit hole goes. Submit an issue if you find strange behavior beyond here, but steel your heart against the likely outcome of closing with wontfix.

import/resolver

See resolvers.

import/cache

Settings for cache behavior. Memoization is used at various levels to avoid the copious amount of fs.statSync/module parse calls required to correctly report errors.

For normal eslint console runs, the cache lifetime is irrelevant, as we can strongly assume that files should not be changing during the lifetime of the linter process (and thus, the cache in memory).

For long-lasting processes, like eslint_d or eslint-loader, however, it’s important that there be some notion of staleness.

If you never use eslint_d or eslint-loader, you may set the cache lifetime to Infinity and everything should be fine:

Otherwise, set some integer, and cache entries will be evicted after that many seconds have elapsed:

import/internal-regex

A regex for packages that should be treated as internal. Useful when you are working in a monorepo or developing a set of packages that depend on each other.

By default, any package referenced from import/external-module-folders will be considered "external", including packages in a monorepo (e.g., a yarn workspaces or lerna environment). If you want to mark these packages as "internal", this setting will be useful.

For example, if your packages in a monorepo are all in @scope, you can configure import/internal-regex like this:

SublimeLinter-eslint

SublimeLinter-eslint introduced a change to support .eslintignore files which altered the way file paths are passed to ESLint when linting during editing. This change sends a relative path instead of the absolute path to the file (as ESLint normally provides), which can make it impossible for this plugin to resolve dependencies on the filesystem.

This workaround should no longer be necessary with the release of ESLint 2.0, when .eslintignore will be updated to work more like a .gitignore, which should support proper ignoring of absolute paths via --stdin-filename.

In the meantime, see roadhump/SublimeLinter-eslint#58 for more details and discussion, but essentially, you may find you need to add the following SublimeLinter config to your Sublime project file:
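The relevant part of the Sublime project file is presumably along these lines (a sketch; "code" stands in for your source folder):

```json
{
  "folders": [
    { "path": "code" }
  ],
  "SublimeLinter": {
    "linters": {
      "eslint": {
        "chdir": "${project}/code"
      }
    }
  }
}
```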

Note that ${project}/code matches the code provided at folders[0].path.

The purpose of the chdir setting, in this case, is to set the working directory from which ESLint is executed to be the same as the directory on which SublimeLinter-eslint bases the relative path it provides.

See the SublimeLinter docs on chdir for more information, in case this does not work with your project.

If you are not using .eslintignore, or don’t have a Sublime project file, you can also do the following via a .sublimelinterrc file in some ancestor directory of your code:
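A sketch of the equivalent `.sublimelinterrc`, assuming the `${directory}` token resolves to the linted file's directory:

```json
{
  "linters": {
    "eslint": {
      "chdir": "${directory}"
    }
  }
}
```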

I also found that I needed to set rc_search_limit to null, which removes the file hierarchy search limit when looking up the directory tree for .sublimelinterrc:

In Package Settings / SublimeLinter / User Settings:
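That is, in the SublimeLinter user settings (sketch):

```json
{
  "user": {
    "rc_search_limit": null
  }
}
```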

I believe this defaults to 3, so you may not need to alter it depending on your project folder max depth.



safe-buffer travis npm downloads javascript style guide

Safer Node.js Buffer API

Use the new Node.js Buffer APIs (Buffer.from, Buffer.alloc, Buffer.allocUnsafe, Buffer.allocUnsafeSlow) in all versions of Node.js.

Uses the built-in implementation when available.

install

npm install safe-buffer

usage

The goal of this package is to provide a safe replacement for the node.js Buffer.

It’s a drop-in replacement for Buffer. You can use it by adding one require line to the top of your node.js modules:

api

Class Method: Buffer.from(array)

  • array {Array}

Allocates a new Buffer using an array of octets.

A TypeError will be thrown if array is not an Array.

Class Method: Buffer.from(arrayBuffer[, byteOffset[, length]])

  • arrayBuffer {ArrayBuffer} The .buffer property of a TypedArray or a new ArrayBuffer()
  • byteOffset {Number} Default: 0
  • length {Number} Default: arrayBuffer.length - byteOffset

When passed a reference to the .buffer property of a TypedArray instance, the newly created Buffer will share the same allocated memory as the TypedArray.

The optional byteOffset and length arguments specify a memory range within the arrayBuffer that will be shared by the Buffer.

A TypeError will be thrown if arrayBuffer is not an ArrayBuffer.

Class Method: Buffer.from(buffer)

  • buffer {Buffer}

Copies the passed buffer data onto a new Buffer instance.

A TypeError will be thrown if buffer is not a Buffer.

Class Method: Buffer.from(str[, encoding])

  • str {String} String to encode.
  • encoding {String} Encoding to use, Default: 'utf8'

Creates a new Buffer containing the given JavaScript string str. If provided, the encoding parameter identifies the character encoding. If not provided, encoding defaults to 'utf8'.

A TypeError will be thrown if str is not a string.

Class Method: Buffer.alloc(size[, fill[, encoding]])

  • size {Number}
  • fill {Value} Default: undefined
  • encoding {String} Default: utf8

Allocates a new Buffer of size bytes. If fill is undefined, the Buffer will be zero-filled.

The size must be less than or equal to the value of require('buffer').kMaxLength (on 64-bit architectures, kMaxLength is (2^31)-1). Otherwise, a RangeError is thrown. A zero-length Buffer will be created if a size less than or equal to 0 is specified.

If fill is specified, the allocated Buffer will be initialized by calling buf.fill(fill). See buf.fill() for more information.

If both fill and encoding are specified, the allocated Buffer will be initialized by calling buf.fill(fill, encoding). For example:

Calling Buffer.alloc(size) can be significantly slower than the alternative Buffer.allocUnsafe(size) but ensures that the newly created Buffer instance contents will never contain sensitive data.

A TypeError will be thrown if size is not a number.

Class Method: Buffer.allocUnsafe(size)

  • size {Number}

Allocates a new non-zero-filled Buffer of size bytes. The size must be less than or equal to the value of require('buffer').kMaxLength (on 64-bit architectures, kMaxLength is (2^31)-1). Otherwise, a RangeError is thrown. A zero-length Buffer will be created if a size less than or equal to 0 is specified.

The underlying memory for Buffer instances created in this way is not initialized. The contents of the newly created Buffer are unknown and may contain sensitive data. Use buf.fill(0) to initialize such Buffer instances to zeroes.

A TypeError will be thrown if size is not a number.

Note that the Buffer module pre-allocates an internal Buffer instance of size Buffer.poolSize that is used as a pool for the fast allocation of new Buffer instances created using Buffer.allocUnsafe(size) (and the deprecated new Buffer(size) constructor) only when size is less than or equal to Buffer.poolSize >> 1 (floor of Buffer.poolSize divided by two). The default value of Buffer.poolSize is 8192 but can be modified.

Use of this pre-allocated internal memory pool is a key difference between calling Buffer.alloc(size, fill) vs. Buffer.allocUnsafe(size).fill(fill). Specifically, Buffer.alloc(size, fill) will never use the internal Buffer pool, while Buffer.allocUnsafe(size).fill(fill) will use the internal Buffer pool if size is less than or equal to half Buffer.poolSize. The difference is subtle but can be important when an application requires the additional performance that Buffer.allocUnsafe(size) provides.

Class Method: Buffer.allocUnsafeSlow(size)

  • size {Number}

Allocates a new non-zero-filled and non-pooled Buffer of size bytes. The size must be less than or equal to the value of require('buffer').kMaxLength (on 64-bit architectures, kMaxLength is (2^31)-1). Otherwise, a RangeError is thrown. A zero-length Buffer will be created if a size less than or equal to 0 is specified.

The underlying memory for Buffer instances created in this way is not initialized. The contents of the newly created Buffer are unknown and may contain sensitive data. Use buf.fill(0) to initialize such Buffer instances to zeroes.

When using Buffer.allocUnsafe() to allocate new Buffer instances, allocations under 4KB are, by default, sliced from a single pre-allocated Buffer. This allows applications to avoid the garbage collection overhead of creating many individually allocated Buffers. This approach improves both performance and memory usage by eliminating the need to track and clean up as many Persistent objects.

However, in the case where a developer may need to retain a small chunk of memory from a pool for an indeterminate amount of time, it may be appropriate to create an un-pooled Buffer instance using Buffer.allocUnsafeSlow() then copy out the relevant bits.

Buffer.allocUnsafeSlow() should be used only as a last resort, after a developer has observed undue memory retention in their applications.

A TypeError will be thrown if size is not a number.

All the Rest

The rest of the Buffer API is exactly the same as in node.js. See the docs.

Why is Buffer unsafe?

Today, the node.js Buffer constructor is overloaded to handle many different argument types like String, Array, Object, TypedArrayView (Uint8Array, etc.), ArrayBuffer, and also Number.

The API is optimized for convenience: you can throw any type at it, and it will try to do what you want.

Because the Buffer constructor is so powerful, you often see code like this:
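The original snippet did not survive extraction; the pattern was presumably along these lines (toHex is illustrative, not a real API):

```javascript
// The risky pattern: feed whatever the caller passes straight into Buffer.
function toHex (value) {
  return new Buffer(value).toString('hex')  // unsafe if value is a Number
}

console.log(toHex('abc'))  // '616263', as intended for strings
```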

But what happens if toHex is called with a Number argument?

Remote Memory Disclosure

If an attacker can make your program call the Buffer constructor with a Number argument, then they can make it allocate uninitialized memory from the node.js process. This could potentially disclose TLS private keys, user data, or database passwords.

When the Buffer constructor is passed a Number argument, it returns an UNINITIALIZED block of memory of the specified size. When you create a Buffer like this, you MUST overwrite the contents before returning it to the user.

From the node.js docs:

new Buffer(size)

  • size Number

The underlying memory for Buffer instances created in this way is not initialized. The contents of a newly created Buffer are unknown and could contain sensitive data. Use buf.fill(0) to initialize a Buffer to zeroes.

(Emphasis our own.)

Whenever the programmer intended to create an uninitialized Buffer you often see code like this:
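That is, something like the following sketch, which is only safe if every byte is overwritten before the buffer escapes:

```javascript
var buf = new Buffer(16)  // deliberately uninitialized memory
buf.fill(0)               // the mandatory overwrite before use
```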

Would this ever be a problem in real code?

Yes. It’s surprisingly common to forget to check the type of your variables in a dynamically-typed language like JavaScript.

Usually the consequence of assuming the wrong type is that your program crashes with an uncaught exception. But the failure mode for forgetting to check the type of arguments to the Buffer constructor is more catastrophic.

Here’s an example of a vulnerable service that takes a JSON payload and converts it to hex:
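The original HTTP-server example was lost in extraction; a hedged sketch of its core, condensed to just the request handler (the "str" field name is illustrative):

```javascript
// Parse a JSON body and hex-encode its "str" field with new Buffer(...).
function handle (jsonBody) {
  var data = JSON.parse(jsonBody)
  return new Buffer(data.str).toString('hex')  // unsafe: str may be a Number
}

console.log(handle('{"str":"hi"}'))  // '6869', as intended for strings
// handle('{"str":1000}') instead returns 1000 bytes of process memory as hex
```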

In this example, an http client just has to send:

and it will get back 1,000 bytes of uninitialized memory from the server.

This is a very serious bug. It’s similar in severity to the Heartbleed bug that allowed disclosure of OpenSSL process memory by remote attackers.

Which real-world packages were vulnerable?

bittorrent-dht

Mathias Buus and I (Feross Aboukhadijeh) found this issue in one of our own packages, bittorrent-dht. The bug would allow anyone on the internet to send a series of messages to a user of bittorrent-dht and get them to reveal 20 bytes at a time of uninitialized memory from the node.js process.

Here’s the commit that fixed it. We released a new fixed version, created a Node Security Project disclosure, and deprecated all vulnerable versions on npm so users will get a warning to upgrade to a newer version.

ws

That got us wondering if there were other vulnerable packages. Sure enough, within a short period of time, we found the same issue in ws, the most popular WebSocket implementation in node.js.

If certain APIs were called with Number parameters instead of String or Buffer as expected, then uninitialized server memory would be disclosed to the remote peer.

These were the vulnerable methods:

Here’s a vulnerable socket server with some echo functionality:

socket.send(number), called on the server, will disclose server memory.

Here’s the release where the issue was fixed, with a more detailed explanation. Props to Arnout Kazemier for the quick fix. Here’s the Node Security Project disclosure.

What’s the solution?

It’s important that node.js offers a fast way to get memory; otherwise, performance-critical applications would needlessly get a lot slower.

But we need a better way to signal our intent as programmers. When we want uninitialized memory, we should request it explicitly.

Sensitive functionality should not be packed into a developer-friendly API that loosely accepts many different types. This type of API encourages the lazy practice of passing variables in without checking the type very carefully.

A new API: Buffer.allocUnsafe(number)

The functionality of creating buffers with uninitialized memory should be part of another API. We propose Buffer.allocUnsafe(number). This way, it’s not part of an API that frequently gets user input of all sorts of different types passed into it.

How do we fix node.js core?

We sent a PR to node.js core (merged as semver-major) which defends against one case:
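A sketch of the guarded case: passing an encoding as the second argument implies the first should have been a string, so a Number first argument throws on Node versions that include the fix:

```javascript
var threw = false
try {
  new Buffer(1000, 'utf8')  // Number where a string was clearly intended
} catch (e) {
  threw = true
}
console.log(threw)
```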

In this situation, it’s implied that the programmer intended the first argument to be a string, since they passed an encoding as a second argument. Today, node.js will allocate uninitialized memory in the case of new Buffer(number, encoding), which is probably not what the programmer intended.

But this is only a partial solution, since if the programmer does new Buffer(variable) (without an encoding parameter) there’s no way to know what they intended. If variable is sometimes a number, then uninitialized memory will sometimes be returned.

What’s the real long-term fix?

We could deprecate and remove new Buffer(number) and use Buffer.allocUnsafe(number) when we need uninitialized memory. But that would break 1000s of packages.

We believe the best solution is to:

1. Change new Buffer(number) to return safe, zeroed-out memory

2. Create a new API for creating uninitialized Buffers. We propose: Buffer.allocUnsafe(number)

Update

We now support adding three new APIs:

  • Buffer.from(value) - convert from any type to a buffer
  • Buffer.alloc(size) - create a zero-filled buffer
  • Buffer.allocUnsafe(size) - create an uninitialized buffer with given size

This solves the core problem that affected ws and bittorrent-dht, which is Buffer(variable) getting tricked into taking a number argument.

This way, existing code continues working and the impact on the npm ecosystem will be minimal. Over time, npm maintainers can migrate performance-critical code to use Buffer.allocUnsafe(number) instead of new Buffer(number).

Conclusion

This wasn’t merely a theoretical exercise because we found the issue in some of the most popular npm packages.

Fortunately, there’s an easy fix that can be applied today. Use safe-buffer in place of buffer.

Eventually, we hope that node.js core can switch to this new, safer behavior. We believe the impact on the ecosystem would be minimal since it’s not a breaking change. Well-maintained, popular packages would be updated to use Buffer.alloc quickly, while older, insecure packages would magically become safe from this attack vector.

credit

The original issues in bittorrent-dht (disclosure) and ws (disclosure) were discovered by Mathias Buus and Feross Aboukhadijeh.

Thanks to Adam Baldwin for helping disclose these issues and for his work running the Node Security Project.

Thanks to John Hiesey for proofreading this README and auditing the code.



safe-buffer travis npm downloads javascript style guide

Safer Node.js Buffer API

Use the new Node.js Buffer APIs (Buffer.from, Buffer.alloc, Buffer.allocUnsafe, Buffer.allocUnsafeSlow) in all versions of Node.js.

Uses the built-in implementation when available.

install

npm install safe-buffer

usage

The goal of this package is to provide a safe replacement for the node.js Buffer.

It’s a drop-in replacement for Buffer. You can use it by adding one require line to the top of your node.js modules:

api

Class Method: Buffer.from(array)

  • array {Array}

Allocates a new Buffer using an array of octets.

A TypeError will be thrown if array is not an Array.

Class Method: Buffer.from(arrayBuffer[, byteOffset[, length]])

  • arrayBuffer {ArrayBuffer} The .buffer property of a TypedArray or a new ArrayBuffer()
  • byteOffset {Number} Default: 0
  • length {Number} Default: arrayBuffer.length - byteOffset

When passed a reference to the .buffer property of a TypedArray instance, the newly created Buffer will share the same allocated memory as the TypedArray.

The optional byteOffset and length arguments specify a memory range within the arrayBuffer that will be shared by the Buffer.

A TypeError will be thrown if arrayBuffer is not an ArrayBuffer.

Class Method: Buffer.from(buffer)

  • buffer {Buffer}

Copies the passed buffer data onto a new Buffer instance.

A TypeError will be thrown if buffer is not a Buffer.

Class Method: Buffer.from(str[, encoding])

  • str {String} String to encode.
  • encoding {String} Encoding to use, Default: 'utf8'

Creates a new Buffer containing the given JavaScript string str. If provided, the encoding parameter identifies the character encoding. If not provided, encoding defaults to 'utf8'.

A TypeError will be thrown if str is not a string.

Class Method: Buffer.alloc(size[, fill[, encoding]])

  • size {Number}
  • fill {Value} Default: undefined
  • encoding {String} Default: utf8

Allocates a new Buffer of size bytes. If fill is undefined, the Buffer will be zero-filled.

The size must be less than or equal to the value of require('buffer').kMaxLength (on 64-bit architectures, kMaxLength is (2^31)-1). Otherwise, a [RangeError] is thrown. A zero-length Buffer will be created if a size less than or equal to 0 is specified.

If fill is specified, the allocated Buffer will be initialized by calling buf.fill(fill). See [buf.fill()] for more information.

If both fill and encoding are specified, the allocated Buffer will be initialized by calling buf.fill(fill, encoding). For example:

Calling Buffer.alloc(size) can be significantly slower than the alternative Buffer.allocUnsafe(size) but ensures that the newly created Buffer instance contents will never contain sensitive data.

A TypeError will be thrown if size is not a number.

Class Method: Buffer.allocUnsafe(size)

  • size {Number}

Allocates a new non-zero-filled Buffer of size bytes. The size must be less than or equal to the value of require('buffer').kMaxLength (on 64-bit architectures, kMaxLength is (2^31)-1). Otherwise, a [RangeError] is thrown. A zero-length Buffer will be created if a size less than or equal to 0 is specified.

The underlying memory for Buffer instances created in this way is not initialized. The contents of the newly created Buffer are unknown and may contain sensitive data. Use [buf.fill(0)] to initialize such Buffer instances to zeroes.

A TypeError will be thrown if size is not a number.

Note that the Buffer module pre-allocates an internal Buffer instance of size Buffer.poolSize that is used as a pool for the fast allocation of new Buffer instances created using Buffer.allocUnsafe(size) (and the deprecated new Buffer(size) constructor) only when size is less than or equal to Buffer.poolSize >> 1 (floor of Buffer.poolSize divided by two). The default value of Buffer.poolSize is 8192 but can be modified.

Use of this pre-allocated internal memory pool is a key difference between calling Buffer.alloc(size, fill) vs. Buffer.allocUnsafe(size).fill(fill). Specifically, Buffer.alloc(size, fill) will never use the internal Buffer pool, while Buffer.allocUnsafe(size).fill(fill) will use the internal Buffer pool if size is less than or equal to half Buffer.poolSize. The difference is subtle but can be important when an application requires the additional performance that Buffer.allocUnsafe(size) provides.

Class Method: Buffer.allocUnsafeSlow(size)

  • size {Number}

Allocates a new non-zero-filled and non-pooled Buffer of size bytes. The size must be less than or equal to the value of require('buffer').kMaxLength (on 64-bit architectures, kMaxLength is (2^31)-1). Otherwise, a [RangeError] is thrown. A zero-length Buffer will be created if a size less than or equal to 0 is specified.

The underlying memory for Buffer instances created in this way is not initialized. The contents of the newly created Buffer are unknown and may contain sensitive data. Use [buf.fill(0)] to initialize such Buffer instances to zeroes.

When using Buffer.allocUnsafe() to allocate new Buffer instances, allocations under 4KB are, by default, sliced from a single pre-allocated Buffer. This allows applications to avoid the garbage collection overhead of creating many individually allocated Buffers. This approach improves both performance and memory usage by eliminating the need to track and cleanup as many Persistent objects.

However, in the case where a developer may need to retain a small chunk of memory from a pool for an indeterminate amount of time, it may be appropriate to create an un-pooled Buffer instance using Buffer.allocUnsafeSlow() then copy out the relevant bits.

Use of Buffer.allocUnsafeSlow() should be used only as a last resort after a developer has observed undue memory retention in their applications.

A TypeError will be thrown if size is not a number.

All the Rest

The rest of the Buffer API is exactly the same as in node.js. See the docs.

Why is Buffer unsafe?

Today, the node.js Buffer constructor is overloaded to handle many different argument types like String, Array, Object, TypedArrayView (Uint8Array, etc.), ArrayBuffer, and also Number.

The API is optimized for convenience: you can throw any type at it, and it will try to do what you want.

Because the Buffer constructor is so powerful, you often see code like this:

But what happens if toHex is called with a Number argument?

Remote Memory Disclosure

If an attacker can make your program call the Buffer constructor with a Number argument, then they can make it allocate uninitialized memory from the node.js process. This could potentially disclose TLS private keys, user data, or database passwords.

When the Buffer constructor is passed a Number argument, it returns an UNINITIALIZED block of memory of the specified size. When you create a Buffer like this, you MUST overwrite the contents before returning it to the user.

From the node.js docs:

new Buffer(size)

  • size Number

The underlying memory for Buffer instances created in this way is not initialized. The contents of a newly created Buffer are unknown and could contain sensitive data. Use buf.fill(0) to initialize a Buffer to zeroes.

(Emphasis our own.)

Whenever the programmer intended to create an uninitialized Buffer you often see code like this:

Would this ever be a problem in real code?

Yes. It’s surprisingly common to forget to check the type of your variables in a dynamically-typed language like JavaScript.

Usually the consequences of assuming the wrong type is that your program crashes with an uncaught exception. But the failure mode for forgetting to check the type of arguments to the Buffer constructor is more catastrophic.

Here’s an example of a vulnerable service that takes a JSON payload and converts it to hex:

In this example, an http client just has to send:

and it will get back 1,000 bytes of uninitialized memory from the server.

This is a very serious bug. It’s similar in severity to the the Heartbleed bug that allowed disclosure of OpenSSL process memory by remote attackers.

Which real-world packages were vulnerable?

bittorrent-dht

Mathias Buus and I (Feross Aboukhadijeh) found this issue in one of our own packages, bittorrent-dht. The bug would allow anyone on the internet to send a series of messages to a user of bittorrent-dht and get them to reveal 20 bytes at a time of uninitialized memory from the node.js process.

Here’s the commit that fixed it. We released a new fixed version, created a Node Security Project disclosure, and deprecated all vulnerable versions on npm so users will get a warning to upgrade to a newer version.

ws

That got us wondering if there were other vulnerable packages. Sure enough, within a short period of time, we found the same issue in ws, the most popular WebSocket implementation in node.js.

If certain APIs were called with Number parameters instead of String or Buffer as expected, then uninitialized server memory would be disclosed to the remote peer.

These were the vulnerable methods:

Here’s a vulnerable socket server with some echo functionality:

socket.send(number) called on the server, will disclose server memory.

Here’s the release where the issue was fixed, with a more detailed explanation. Props to Arnout Kazemier for the quick fix. Here’s the Node Security Project disclosure.

What’s the solution?

It’s important that node.js offers a fast way to get memory otherwise performance-critical applications would needlessly get a lot slower.

But we need a better way to signal our intent as programmers. When we want uninitialized memory, we should request it explicitly.

Sensitive functionality should not be packed into a developer-friendly API that loosely accepts many different types. This type of API encourages the lazy practice of passing variables in without checking the type very carefully.

A new API: Buffer.allocUnsafe(number)

The functionality of creating buffers with uninitialized memory should be part of another API. We propose Buffer.allocUnsafe(number). This way, it’s not part of an API that frequently gets user input of all sorts of different types passed into it.

How do we fix node.js core?

We sent a PR to node.js core (merged as semver-major) which defends against one case:

In this situation, it’s implied that the programmer intended the first argument to be a string, since they passed an encoding as a second argument. Today, node.js will allocate uninitialized memory in the case of new Buffer(number, encoding), which is probably not what the programmer intended.

But this is only a partial solution, since if the programmer does new Buffer(variable) (without an encoding parameter) there’s no way to know what they intended. If variable is sometimes a number, then uninitialized memory will sometimes be returned.

What’s the real long-term fix?

We could deprecate and remove new Buffer(number) and use Buffer.allocUnsafe(number) when we need uninitialized memory. But that would break 1000s of packages.

We believe the best solution is to:

1. Change new Buffer(number) to return safe, zeroed-out memory

2. Create a new API for creating uninitialized Buffers. We propose: Buffer.allocUnsafe(number)

Update

We now support adding three new APIs:

  • Buffer.from(value) - convert from any type to a buffer
  • Buffer.alloc(size) - create a zero-filled buffer
  • Buffer.allocUnsafe(size) - create an uninitialized buffer with given size

This solves the core problem that affected ws and bittorrent-dht which is Buffer(variable) getting tricked into taking a number argument.

This way, existing code continues working and the impact on the npm ecosystem will be minimal. Over time, npm maintainers can migrate performance-critical code to use Buffer.allocUnsafe(number) instead of new Buffer(number).

Conclusion

This wasn’t merely a theoretical exercise: we found the issue in some of the most popular npm packages.

Fortunately, there’s an easy fix that can be applied today. Use safe-buffer in place of buffer.

Eventually, we hope that node.js core can switch to this new, safer behavior. We believe the impact on the ecosystem would be minimal since it’s not a breaking change. Well-maintained, popular packages would be updated to use Buffer.alloc quickly, while older, insecure packages would magically become safe from this attack vector.

credit

The original issues in bittorrent-dht (disclosure) and ws (disclosure) were discovered by Mathias Buus and Feross Aboukhadijeh.

Thanks to Adam Baldwin for helping disclose these issues and for his work running the Node Security Project.

Thanks to John Hiesey for proofreading this README and auditing the code.




and it will get back 1,000 bytes of uninitialized memory from the server.

This is a very serious bug. It’s similar in severity to the the Heartbleed bug that allowed disclosure of OpenSSL process memory by remote attackers.

Which real-world packages were vulnerable?

bittorrent-dht

Mathias Buus and I (Feross Aboukhadijeh) found this issue in one of our own packages, bittorrent-dht. The bug would allow anyone on the internet to send a series of messages to a user of bittorrent-dht and get them to reveal 20 bytes at a time of uninitialized memory from the node.js process.

Here’s the commit that fixed it. We released a new fixed version, created a Node Security Project disclosure, and deprecated all vulnerable versions on npm so users will get a warning to upgrade to a newer version.

ws

That got us wondering if there were other vulnerable packages. Sure enough, within a short period of time, we found the same issue in ws, the most popular WebSocket implementation in node.js.

If certain APIs were called with Number parameters instead of String or Buffer as expected, then uninitialized server memory would be disclosed to the remote peer.

These were the vulnerable methods:

Here’s a vulnerable socket server with some echo functionality:

socket.send(number) called on the server, will disclose server memory.

Here’s the release where the issue was fixed, with a more detailed explanation. Props to Arnout Kazemier for the quick fix. Here’s the Node Security Project disclosure.

What’s the solution?

It’s important that node.js offers a fast way to get memory otherwise performance-critical applications would needlessly get a lot slower.

But we need a better way to signal our intent as programmers. When we want uninitialized memory, we should request it explicitly.

Sensitive functionality should not be packed into a developer-friendly API that loosely accepts many different types. This type of API encourages the lazy practice of passing variables in without checking the type very carefully.

A new API: Buffer.allocUnsafe(number)

The functionality of creating buffers with uninitialized memory should be part of another API. We propose Buffer.allocUnsafe(number). This way, it’s not part of an API that frequently gets user input of all sorts of different types passed into it.

How do we fix node.js core?

We sent a PR to node.js core (merged as semver-major) which defends against one case:

In this situation, it’s implied that the programmer intended the first argument to be a string, since they passed an encoding as a second argument. Today, node.js will allocate uninitialized memory in the case of new Buffer(number, encoding), which is probably not what the programmer intended.

But this is only a partial solution, since if the programmer does new Buffer(variable) (without an encoding parameter) there’s no way to know what they intended. If variable is sometimes a number, then uninitialized memory will sometimes be returned.

What’s the real long-term fix?

We could deprecate and remove new Buffer(number) and use Buffer.allocUnsafe(number) when we need uninitialized memory. But that would break 1000s of packages.

We believe the best solution is to:

1. Change new Buffer(number) to return safe, zeroed-out memory

2. Create a new API for creating uninitialized Buffers. We propose: Buffer.allocUnsafe(number)

Update

We now support adding three new APIs:

  • Buffer.from(value) - convert from any type to a buffer
  • Buffer.alloc(size) - create a zero-filled buffer
  • Buffer.allocUnsafe(size) - create an uninitialized buffer with given size

This solves the core problem that affected ws and bittorrent-dht which is Buffer(variable) getting tricked into taking a number argument.

This way, existing code continues working and the impact on the npm ecosystem will be minimal. Over time, npm maintainers can migrate performance-critical code to use Buffer.allocUnsafe(number) instead of new Buffer(number).

Conclusion

This wasn’t merely a theoretical exercise because we found the issue in some of the most popular npm packages.

Fortunately, there’s an easy fix that can be applied today. Use safe-buffer in place of buffer.

Eventually, we hope that node.js core can switch to this new, safer behavior. We believe the impact on the ecosystem would be minimal since it’s not a breaking change. Well-maintained, popular packages would be updated to use Buffer.alloc quickly, while older, insecure packages would magically become safe from this attack vector.

credit

The original issues in bittorrent-dht (disclosure) and ws (disclosure) were discovered by Mathias Buus and Feross Aboukhadijeh.

Thanks to Adam Baldwin for helping disclose these issues and for his work running the Node Security Project.

Thanks to John Hiesey for proofreading this README and auditing the code.



safe-buffer travis npm downloads javascript style guide

Safer Node.js Buffer API

Use the new Node.js Buffer APIs (Buffer.from, Buffer.alloc, Buffer.allocUnsafe, Buffer.allocUnsafeSlow) in all versions of Node.js.

Uses the built-in implementation when available.

install

npm install safe-buffer

usage

The goal of this package is to provide a safe replacement for the node.js Buffer.

It’s a drop-in replacement for Buffer. You can use it by adding one require line to the top of your node.js modules: var Buffer = require('safe-buffer').Buffer.

api

Class Method: Buffer.from(array)

  • array {Array}

Allocates a new Buffer using an array of octets.

A TypeError will be thrown if array is not an Array.

Class Method: Buffer.from(arrayBuffer[, byteOffset[, length]])

  • arrayBuffer {ArrayBuffer} The .buffer property of a TypedArray or a new ArrayBuffer()
  • byteOffset {Number} Default: 0
  • length {Number} Default: arrayBuffer.length - byteOffset

When passed a reference to the .buffer property of a TypedArray instance, the newly created Buffer will share the same allocated memory as the TypedArray.

The optional byteOffset and length arguments specify a memory range within the arrayBuffer that will be shared by the Buffer.

A TypeError will be thrown if arrayBuffer is not an ArrayBuffer.

Class Method: Buffer.from(buffer)

  • buffer {Buffer}

Copies the passed buffer data onto a new Buffer instance.

A TypeError will be thrown if buffer is not a Buffer.

Class Method: Buffer.from(str[, encoding])

  • str {String} String to encode.
  • encoding {String} Encoding to use, Default: 'utf8'

Creates a new Buffer containing the given JavaScript string str. If provided, the encoding parameter identifies the character encoding. If not provided, encoding defaults to 'utf8'.

A TypeError will be thrown if str is not a string.

Class Method: Buffer.alloc(size[, fill[, encoding]])

  • size {Number}
  • fill {Value} Default: undefined
  • encoding {String} Default: utf8

Allocates a new Buffer of size bytes. If fill is undefined, the Buffer will be zero-filled.

The size must be less than or equal to the value of require('buffer').kMaxLength (on 64-bit architectures, kMaxLength is (2^31)-1). Otherwise, a [RangeError] is thrown. A zero-length Buffer will be created if a size less than or equal to 0 is specified.

If fill is specified, the allocated Buffer will be initialized by calling buf.fill(fill). See [buf.fill()] for more information.

If both fill and encoding are specified, the allocated Buffer will be initialized by calling buf.fill(fill, encoding). For example:
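
For example (a sketch using Node’s built-in Buffer, whose API safe-buffer mirrors):

```javascript
var buf = Buffer.alloc(5, 'a')
// buf.toString() === 'aaaaa'  (<Buffer 61 61 61 61 61>)

// With both fill and encoding, the fill string is decoded first:
var buf2 = Buffer.alloc(11, 'aGVsbG8gd29ybGQ=', 'base64')
// buf2.toString() === 'hello world'
```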

Calling Buffer.alloc(size) can be significantly slower than the alternative Buffer.allocUnsafe(size) but ensures that the newly created Buffer instance contents will never contain sensitive data.

A TypeError will be thrown if size is not a number.

Class Method: Buffer.allocUnsafe(size)

  • size {Number}

Allocates a new non-zero-filled Buffer of size bytes. The size must be less than or equal to the value of require('buffer').kMaxLength (on 64-bit architectures, kMaxLength is (2^31)-1). Otherwise, a [RangeError] is thrown. A zero-length Buffer will be created if a size less than or equal to 0 is specified.

The underlying memory for Buffer instances created in this way is not initialized. The contents of the newly created Buffer are unknown and may contain sensitive data. Use [buf.fill(0)] to initialize such Buffer instances to zeroes.

A TypeError will be thrown if size is not a number.

Note that the Buffer module pre-allocates an internal Buffer instance of size Buffer.poolSize that is used as a pool for the fast allocation of new Buffer instances created using Buffer.allocUnsafe(size) (and the deprecated new Buffer(size) constructor) only when size is less than or equal to Buffer.poolSize >> 1 (floor of Buffer.poolSize divided by two). The default value of Buffer.poolSize is 8192 but can be modified.

Use of this pre-allocated internal memory pool is a key difference between calling Buffer.alloc(size, fill) vs. Buffer.allocUnsafe(size).fill(fill). Specifically, Buffer.alloc(size, fill) will never use the internal Buffer pool, while Buffer.allocUnsafe(size).fill(fill) will use the internal Buffer pool if size is less than or equal to half Buffer.poolSize. The difference is subtle but can be important when an application requires the additional performance that Buffer.allocUnsafe(size) provides.

Class Method: Buffer.allocUnsafeSlow(size)

  • size {Number}

Allocates a new non-zero-filled and non-pooled Buffer of size bytes. The size must be less than or equal to the value of require('buffer').kMaxLength (on 64-bit architectures, kMaxLength is (2^31)-1). Otherwise, a [RangeError] is thrown. A zero-length Buffer will be created if a size less than or equal to 0 is specified.

The underlying memory for Buffer instances created in this way is not initialized. The contents of the newly created Buffer are unknown and may contain sensitive data. Use [buf.fill(0)] to initialize such Buffer instances to zeroes.

When using Buffer.allocUnsafe() to allocate new Buffer instances, allocations under 4KB are, by default, sliced from a single pre-allocated Buffer. This allows applications to avoid the garbage collection overhead of creating many individually allocated Buffers. This approach improves both performance and memory usage by eliminating the need to track and cleanup as many Persistent objects.

However, in the case where a developer may need to retain a small chunk of memory from a pool for an indeterminate amount of time, it may be appropriate to create an un-pooled Buffer instance using Buffer.allocUnsafeSlow() then copy out the relevant bits.

Use of Buffer.allocUnsafeSlow() should be used only as a last resort after a developer has observed undue memory retention in their applications.

A TypeError will be thrown if size is not a number.

All the Rest

The rest of the Buffer API is exactly the same as in node.js. See the docs.

Why is Buffer unsafe?

Today, the node.js Buffer constructor is overloaded to handle many different argument types like String, Array, Object, TypedArrayView (Uint8Array, etc.), ArrayBuffer, and also Number.

The API is optimized for convenience: you can throw any type at it, and it will try to do what you want.

Because the Buffer constructor is so powerful, you often see code like this:
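
Such code typically looks something like this (a sketch; toHex is illustrative):

```javascript
// Convert any input to hex — convenient, but dangerously overloaded:
function toHex (str) {
  return new Buffer(str).toString('hex')
}

toHex('abc') // → '616263'
```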

But what happens if toHex is called with a Number argument?

Remote Memory Disclosure

If an attacker can make your program call the Buffer constructor with a Number argument, then they can make it allocate uninitialized memory from the node.js process. This could potentially disclose TLS private keys, user data, or database passwords.

When the Buffer constructor is passed a Number argument, it returns an UNINITIALIZED block of memory of the specified size. When you create a Buffer like this, you MUST overwrite the contents before returning it to the user.

From the node.js docs:

new Buffer(size)

  • size Number

The underlying memory for Buffer instances created in this way is not initialized. The contents of a newly created Buffer are unknown and could contain sensitive data. Use buf.fill(0) to initialize a Buffer to zeroes.

(Emphasis our own.)

When the programmer intended to create an uninitialized Buffer, you often see code like this:
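
A minimal sketch of that pattern:

```javascript
// Allocate, then immediately overwrite every byte so nothing leaks:
var buf = new Buffer(16) // DANGER: may contain old process memory
buf.fill(0)              // now safely zeroed
```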

Would this ever be a problem in real code?

Yes. It’s surprisingly common to forget to check the type of your variables in a dynamically-typed language like JavaScript.

Usually the consequence of assuming the wrong type is that your program crashes with an uncaught exception. But the failure mode for forgetting to check the type of arguments to the Buffer constructor is more catastrophic.

Here’s an example of a vulnerable service that takes a JSON payload and converts it to hex:

In this example, an http client just has to send a JSON payload whose field is a number (for example, 1000) instead of a string, and it will get back 1,000 bytes of uninitialized memory from the server.

This is a very serious bug. It’s similar in severity to the Heartbleed bug that allowed disclosure of OpenSSL process memory by remote attackers.

Which real-world packages were vulnerable?

bittorrent-dht

Mathias Buus and I (Feross Aboukhadijeh) found this issue in one of our own packages, bittorrent-dht. The bug would allow anyone on the internet to send a series of messages to a user of bittorrent-dht and get them to reveal 20 bytes at a time of uninitialized memory from the node.js process.

Here’s the commit that fixed it. We released a new fixed version, created a Node Security Project disclosure, and deprecated all vulnerable versions on npm so users will get a warning to upgrade to a newer version.

ws

That got us wondering if there were other vulnerable packages. Sure enough, within a short period of time, we found the same issue in ws, the most popular WebSocket implementation in node.js.

If certain APIs were called with Number parameters instead of String or Buffer as expected, then uninitialized server memory would be disclosed to the remote peer.

In a vulnerable socket server with simple echo functionality, calling socket.send(number) on the server will disclose server memory.

Here’s the release where the issue was fixed, with a more detailed explanation. Props to Arnout Kazemier for the quick fix. Here’s the Node Security Project disclosure.

What’s the solution?

It’s important that node.js offers a fast way to get memory; otherwise, performance-critical applications would needlessly get a lot slower.

But we need a better way to signal our intent as programmers. When we want uninitialized memory, we should request it explicitly.

Sensitive functionality should not be packed into a developer-friendly API that loosely accepts many different types. This type of API encourages the lazy practice of passing variables in without checking the type very carefully.

A new API: Buffer.allocUnsafe(number)

The functionality of creating buffers with uninitialized memory should be part of another API. We propose Buffer.allocUnsafe(number). This way, it’s not part of an API that frequently gets user input of all sorts of different types passed into it.

How do we fix node.js core?

We sent a PR to node.js core (merged as semver-major) which defends against one case:
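
The guarded case can be sketched as follows (illustrative values):

```javascript
// Intended usage: the first argument is a string, with its encoding second.
var ok = new Buffer('some text', 'utf8')

// The mistake the PR defends against — a number plus an encoding.
// After the fix, this throws a TypeError instead of returning
// uninitialized memory:
// new Buffer(16, 'utf8')
```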

In this situation, it’s implied that the programmer intended the first argument to be a string, since they passed an encoding as a second argument. Today, node.js will allocate uninitialized memory in the case of new Buffer(number, encoding), which is probably not what the programmer intended.

But this is only a partial solution, since if the programmer does new Buffer(variable) (without an encoding parameter) there’s no way to know what they intended. If variable is sometimes a number, then uninitialized memory will sometimes be returned.

What’s the real long-term fix?

We could deprecate and remove new Buffer(number) and use Buffer.allocUnsafe(number) when we need uninitialized memory. But that would break 1000s of packages.

We believe the best solution is to:

1. Change new Buffer(number) to return safe, zeroed-out memory

2. Create a new API for creating uninitialized Buffers. We propose: Buffer.allocUnsafe(number)

Update

We now support three new APIs:

  • Buffer.from(value) - convert from any type to a buffer
  • Buffer.alloc(size) - create a zero-filled buffer
  • Buffer.allocUnsafe(size) - create an uninitialized buffer with given size
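
The three APIs side by side (a sketch using Node’s built-in Buffer; safe-buffer provides the same methods on older Node versions):

```javascript
var a = Buffer.from('abc')      // convert a value to a buffer
var b = Buffer.alloc(4)         // zero-filled buffer
var c = Buffer.allocUnsafe(4)   // uninitialized — overwrite before use!
c.fill(0)
```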

This solves the core problem that affected ws and bittorrent-dht: Buffer(variable) being tricked into taking a number argument.

This way, existing code continues working and the impact on the npm ecosystem will be minimal. Over time, npm maintainers can migrate performance-critical code to use Buffer.allocUnsafe(number) instead of new Buffer(number).

Conclusion

This wasn’t merely a theoretical exercise because we found the issue in some of the most popular npm packages.

Fortunately, there’s an easy fix that can be applied today. Use safe-buffer in place of buffer.

Eventually, we hope that node.js core can switch to this new, safer behavior. We believe the impact on the ecosystem would be minimal since it’s not a breaking change. Well-maintained, popular packages would be updated to use Buffer.alloc quickly, while older, insecure packages would magically become safe from this attack vector.

credit

The original issues in bittorrent-dht (disclosure) and ws (disclosure) were discovered by Mathias Buus and Feross Aboukhadijeh.

Thanks to Adam Baldwin for helping disclose these issues and for his work running the Node Security Project.

Thanks to John Hiesey for proofreading this README and auditing the code.



node-fetch

npm version build status coverage status install size Discord

A light-weight module that brings window.fetch to Node.js

(We are looking for v2 maintainers and collaborators)

Backers

Motivation

Instead of implementing XMLHttpRequest in Node.js to run a browser-specific Fetch polyfill, why not go from native http to the fetch API directly? Hence, node-fetch: minimal code for a window.fetch compatible API on the Node.js runtime.

See Matt Andrews’ isomorphic-fetch or Leonardo Quixada’s cross-fetch for isomorphic usage (exports node-fetch for server-side, whatwg-fetch for client-side).

Features

  • Stay consistent with window.fetch API.
  • Make conscious trade-off when following WHATWG fetch spec and stream spec implementation details, document known differences.
  • Use native promise but allow substituting it with [insert your favorite promise library].
  • Use native Node streams for body on both request and response.
  • Decode content encoding (gzip/deflate) properly and convert string output (such as res.text() and res.json()) to UTF-8 automatically.
  • Useful extensions such as timeout, redirect limit, response size limit, explicit errors for troubleshooting.

Difference from client-side fetch

  • If you happen to use a missing feature that window.fetch offers, feel free to open an issue.
  • Pull requests are welcomed too!

Installation

Current stable release (2.x)

Loading and configuring the module

We suggest you load the module via require until the stabilization of ES modules in node: const fetch = require('node-fetch');

If you are using a Promise library other than native, set it through fetch.Promise (for example, fetch.Promise = require('bluebird')).

Common Usage

NOTE: The documentation below is up-to-date with 2.x releases; see the 1.x readme, changelog and 2.x upgrade guide for the differences.

Plain text or HTML

JSON

Simple Post

Post with JSON
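
A JSON POST typically combines JSON.stringify with an explicit Content-Type header (a sketch; the endpoint URL is illustrative):

```javascript
const body = { a: 1 };
const options = {
  method: 'post',
  body: JSON.stringify(body),
  headers: { 'Content-Type': 'application/json' },
};
// With node-fetch loaded as `fetch`:
// fetch('https://example.com/post', options)
//   .then(res => res.json())
//   .then(json => console.log(json));
```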

Post with form parameters

URLSearchParams is available in Node.js as of v7.5.0. See official documentation for more usage methods.

NOTE: The Content-Type header is only set automatically to x-www-form-urlencoded when an instance of URLSearchParams is given as such:
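
For instance (a sketch; the fetch call is shown for context and the URL is illustrative):

```javascript
// URLSearchParams has been a Node.js global since v10.
const params = new URLSearchParams();
params.append('a', 1);
// params.toString() === 'a=1'

// Passing the instance itself as the body triggers the automatic
// Content-Type: application/x-www-form-urlencoded header:
// fetch('https://example.com/post', { method: 'POST', body: params });
```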

Handling exceptions

NOTE: 3xx-5xx responses are NOT exceptions and should be handled in then(); see the next section for more information.

Adding a catch to the fetch promise chain will catch all exceptions, such as errors originating from node core libraries, network errors and operational errors, which are instances of FetchError. See the error handling document for more details.

Handling client and server errors

It is common to create a helper function to check that the response contains no client (4xx) or server (5xx) error responses:
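
One common shape for such a helper (a sketch; HTTPResponseError is an illustrative name, not a node-fetch export):

```javascript
class HTTPResponseError extends Error {
  constructor (response) {
    super(`HTTP Error Response: ${response.status} ${response.statusText}`);
    this.response = response;
  }
}

function checkStatus (response) {
  if (response.ok) {
    // response.ok is true for 2xx status codes
    return response;
  }
  throw new HTTPResponseError(response);
}
```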

Advanced Usage

Streams

The “Node.js way” is to use streams when possible:

Buffer

If you prefer to cache binary data in full, use buffer(). (NOTE: buffer() is a node-fetch-only API)

Accessing Headers and other Meta data

Unlike browsers, you can access raw Set-Cookie headers manually using Headers.raw(). This is a node-fetch only API.

Post data using a file stream

Post with form-data (detect multipart)

Request cancellation with AbortSignal

NOTE: You may cancel streamed requests only on Node >= v8.0.0

You may cancel requests with AbortController. A suggested implementation is abort-controller.

An example of timing out a request after 150ms could be achieved as the following:
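
A sketch of the pattern (AbortController is a Node.js global since v15; earlier versions can use the abort-controller package; the URL is illustrative):

```javascript
const controller = new AbortController();
const timeout = setTimeout(() => controller.abort(), 150);

// fetch('https://example.com/slow', { signal: controller.signal })
//   .then(res => res.json())
//   .catch(err => {
//     if (err.name === 'AbortError') console.log('request was aborted');
//   })
//   .finally(() => clearTimeout(timeout));
```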

See test cases for more examples.

API

fetch(url[, options])

  • url A string representing the URL for fetching
  • options Options for the HTTP(S) request
  • Returns: Promise<Response>

Perform an HTTP(S) fetch.

url should be an absolute url, such as https://example.com/. A path-relative URL (/file/under/root) or protocol-relative URL (//can-be-http-or-https.com/) will result in a rejected Promise.

Options

The default values are shown after each option key.

Default Headers

If no values are set, the following request headers will be sent automatically:

  • Accept-Encoding: gzip,deflate (when options.compress === true)
  • Accept: */*
  • Connection: close (when no options.agent is present)
  • Content-Length: (automatically calculated, if possible)
  • Transfer-Encoding: chunked (when req.body is a stream)
  • User-Agent: node-fetch/1.0 (+https://github.com/bitinn/node-fetch)

Note: when body is a Stream, Content-Length is not set automatically.

Custom Agent

The agent option allows you to specify networking related options which are out of the scope of Fetch, including and not limited to the following:

  • Use only IPv4 or IPv6
  • Custom DNS Lookup

See http.Agent for more information.

In addition, the agent option accepts a function that returns an http(s).Agent instance given the current URL; this is useful during a redirection chain across HTTP and HTTPS protocols.

Class: Request

An HTTP(S) request containing information about URL, method, headers, and the body. This class implements the Body interface.

Due to the nature of Node.js, the following properties are not implemented at this moment:

  • type
  • destination
  • referrer
  • referrerPolicy
  • mode
  • credentials
  • cache
  • integrity
  • keepalive

The following node-fetch extension properties are provided:

  • follow
  • compress
  • counter
  • agent

See options for exact meaning of these extensions.

new Request(input[, options])

(spec-compliant)

  • input A string representing a URL, or another Request (which will be cloned)
  • options Options for the HTTP(S) request

Constructs a new Request object. The constructor is identical to that in the browser.

In most cases, directly fetch(url, options) is simpler than creating a Request object.

Class: Response

An HTTP(S) response. This class implements the Body interface.

The following properties are not implemented in node-fetch at this moment:

  • Response.error()
  • Response.redirect()
  • type
  • trailer

new Response([body[, options]])

(spec-compliant)

Constructs a new Response object. The constructor is identical to that in the browser.

Because Node.js does not implement service workers (for which this class was designed), one rarely has to construct a Response directly.

response.ok

(spec-compliant)

Convenience property representing whether the request ended normally. Will evaluate to true if the response status was greater than or equal to 200 but smaller than 300.

response.redirected

(spec-compliant)

Convenience property representing whether the request has been redirected at least once. Will evaluate to true if the internal redirect counter is greater than 0.

Class: Headers

This class allows manipulating and iterating over a set of HTTP headers. All methods specified in the Fetch Standard are implemented.

new Headers([init])

(spec-compliant)

  • init Optional argument to pre-fill the Headers object

Construct a new Headers object. init can be either null, a Headers object, a key-value map object, or any iterable object.

Interface: Body

Body is an abstract interface with methods that are applicable to both Request and Response classes.

The following methods are not yet implemented in node-fetch at this moment:

  • formData()

body.body

(deviation from spec)

Data are encapsulated in the Body object. Note that while the Fetch Standard requires the property to always be a WHATWG ReadableStream, in node-fetch it is a Node.js Readable stream.

body.bodyUsed

(spec-compliant)

  • Boolean

A boolean property indicating whether this body has been consumed. Per the spec, a consumed body cannot be used again.

body.arrayBuffer()

body.blob()

body.json()

body.text()

(spec-compliant)

  • Returns: Promise

Consume the body and return a promise that will resolve to one of these formats.

body.buffer()

(node-fetch extension)

  • Returns: Promise<Buffer>

Consume the body and return a promise that will resolve to a Buffer.

body.textConverted()

(node-fetch extension)

  • Returns: Promise<String>

Identical to body.text(), except instead of always converting to UTF-8, encoding sniffing will be performed and text converted to UTF-8 if possible.

(This API requires an optional dependency of the npm package encoding, which you need to install manually. webpack users may see a warning message due to this optional dependency.)

Class: FetchError

(node-fetch extension)

An operational error in the fetching process. See ERROR-HANDLING.md for more info.

Class: AbortError

(node-fetch extension)

An Error thrown when the request is aborted in response to an AbortSignal’s abort event. It has a name property of 'AbortError'. See ERROR-HANDLING.md for more info.

Acknowledgement

Thanks to github/fetch for providing a solid implementation reference.

node-fetch v1 was maintained by @bitinn; v2 was maintained by @TimothyGu, @bitinn and @jimmywarting; v2 readme is written by @jkantr.



verror: rich JavaScript errors

This module provides several classes in support of Joyent’s Best Practices for Error Handling in Node.js. If you find any of the behavior here confusing or surprising, check out that document first.

The error classes here support:

  • printf-style arguments for the message
  • chains of causes
  • properties to provide extra information about the error
  • creating your own subclasses that support all of these

The classes here are:

  • VError, for chaining errors while preserving each one’s error message. This is useful in servers and command-line utilities when you want to propagate an error up a call stack, but allow various levels to add their own context. See examples below.
  • WError, for wrapping errors while hiding the lower-level messages from the top-level error. This is useful for API endpoints where you don’t want to expose internal error messages, but you still want to preserve the error chain for logging and debugging.
  • SError, which is just like VError but interprets printf-style arguments more strictly.
  • MultiError, which is just an Error that encapsulates one or more other errors. (This is used for parallel operations that return several errors.)


Quick start

First, install the package:

npm install verror

If nothing else, you can use VError as a drop-in replacement for the built-in JavaScript Error class, with the addition of printf-style messages:

This prints:

missing file: "/etc/passwd"

You can also pass a cause argument, which is any other Error object:

This prints out:

stat "/nonexistent": ENOENT, stat '/nonexistent'

which resembles how Unix programs typically report errors:

$ sort /nonexistent
sort: open failed: /nonexistent: No such file or directory

To match the Unixy feel, when you print out the error, just prepend the program’s name to the VError’s message. Or just call node-cmdutil.fail(your_verror), which does this for you.

You can get the next-level Error using err.cause():

prints:

ENOENT, stat '/nonexistent'

Of course, you can chain these as many times as you want, and it works with any kind of Error:

This prints:

request failed: failed to stat "/junk": No such file or directory

The idea is that each layer in the stack annotates the error with a description of what it was doing. The end result is a message that explains what happened at each level.

You can also decorate Error objects with additional information so that callers can not only handle each kind of error differently, but also construct their own error messages (e.g., to localize them, format them, group them by type, and so on). See the example below.



Deeper dive

The two main goals for VError are:

  • Make it easy to construct clear, complete error messages intended for people. Clear error messages greatly improve both user experience and debuggability, so we wanted to make it easy to build them. That’s why the constructor takes printf-style arguments.
  • Make it easy to construct objects with programmatically-accessible metadata (which we call informational properties). Instead of just saying “connection refused while connecting to 192.168.1.2:80”, you can add properties like "ip": "192.168.1.2" and "tcpPort": 80. This can be used for feeding into monitoring systems, analyzing large numbers of Errors (as from a log file), or localizing error messages.

To really make this useful, it also needs to be easy to compose Errors: higher-level code should be able to augment the Errors reported by lower-level code to provide a more complete description of what happened. Instead of saying “connection refused”, you can say “operation X failed: connection refused”. That’s why VError supports causes.

In order for all this to work, programmers need to know that it’s generally safe to wrap lower-level Errors with higher-level ones. If you have existing code that handles Errors produced by a library, you should be able to wrap those Errors with a VError to add information without breaking the error handling code. There are two obvious ways that this could break such consumers:

  • The error’s name might change. People typically use name to determine what kind of Error they’ve got. To ensure compatibility, you can create VErrors with custom names, but this approach isn’t great because it prevents you from representing complex failures. For this reason, VError provides findCauseByName, which essentially asks: does this Error or any of its causes have this specific type? If error handling code uses findCauseByName, then subsystems can construct very specific causal chains for debuggability and still let people handle simple cases easily. There’s an example below.
  • The error’s properties might change. People often hang additional properties off of Error objects. If we wrap an existing Error in a new Error, those properties would be lost unless we copied them. But there are a variety of both standard and non-standard Error properties that should not be copied in this way: most obviously name, message, and stack, but also fileName, lineNumber, and a few others. Plus, it’s useful for some Error subclasses to have their own private properties – and there’d be no way to know whether these should be copied. For these reasons, VError first-classes these information properties. You have to provide them in the constructor, you can only fetch them with the info() function, and VError takes care of making sure properties from causes wind up in the info() output.

Let’s put this all together with an example from the node-fast RPC library. node-fast implements a simple RPC protocol for Node programs. There’s a server and client interface, and clients make RPC requests to servers. Let’s say the server fails with an UnauthorizedError with message “user ‘bob’ is not authorized”. The client wraps all server errors with a FastServerError. The client also wraps all request errors with a FastRequestError that includes the name of the RPC call being made. The result of this failed RPC might look like this:

name: FastRequestError
message: “request failed: server error: user ‘bob’ is not authorized”
rpcMsgid:
rpcMethod: GetObject
cause:
    name: FastServerError
    message: “server error: user ‘bob’ is not authorized”
    cause:
        name: UnauthorizedError
        message: “user ‘bob’ is not authorized”
        rpcUser: “bob”

When the caller uses VError.info(), the information properties are collapsed so that it looks like this:

message: “request failed: server error: user ‘bob’ is not authorized”
rpcMsgid:
rpcMethod: GetObject
rpcUser: “bob”

Taking this apart:

  • The error’s message is a complete description of the problem. The caller can report this directly to its caller, which can potentially make its way back to an end user (if appropriate). It can also be logged.
  • The caller can tell that the request failed on the server, rather than as a result of a client problem (e.g., failure to serialize the request), a transport problem (e.g., failure to connect to the server), or something else (e.g., a timeout). They do this using findCauseByName('FastServerError') rather than checking the name field directly.
  • If the caller logs this error, the logs can be analyzed to aggregate errors by cause, by RPC method name, by user, or whatever. Or the error can be correlated with other events for the same rpcMsgid.
  • It wasn’t very hard for any part of the code to contribute to this Error. Each part of the stack has just a few lines to provide exactly what it knows, with very little boilerplate.

It’s not expected that you’d use these complex forms all the time. Despite supporting the complex case above, you can still just do:

new VError("my service isn't working");

for the simple cases.



Reference: VError, WError, SError

VError, WError, and SError are convenient drop-in replacements for Error that support printf-style arguments, first-class causes, informational properties, and other useful features.

Constructors

The VError constructor has several forms:
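As documented for verror, the three constructor forms are:

```
new VError(options, sprintf_args...)   // first form: options object
new VError(cause, sprintf_args...)     // second form: a cause Error first
new VError(sprintf_args...)            // third form: printf-style args only
```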

All of these forms construct a new VError that behaves just like the built-in JavaScript Error class, with some additional methods described below.

In the first form, options is a plain object with any of the following optional properties:

  • name (string): Describes what kind of error this is. This is intended for programmatic use to distinguish between different kinds of errors. Note that in modern versions of Node.js, this name is ignored in the stack property value, but callers can still use the name property to get at it.
  • cause (any Error object): Indicates that the new error was caused by cause. See cause() below. If unspecified, the cause will be null.
  • strict (boolean): If true, then null and undefined values in sprintf_args are passed through to sprintf(). Otherwise, these are replaced with the strings 'null' and 'undefined', respectively.
  • constructorOpt (function): If specified, then the stack trace for this error ends at function constructorOpt. Functions called by constructorOpt will not show up in the stack. This is useful when this class is subclassed.
  • info (object): Specifies arbitrary informational properties that are available through the VError.info(err) static class method. See that method for details.

The second form is equivalent to using the first form with the specified cause as the error’s cause. This form is distinguished from the first form because the first argument is an Error.

The third form is equivalent to using the first form with all default option values. This form is distinguished from the other forms because the first argument is not an object or an Error.

The WError constructor is used exactly the same way as the VError constructor. The SError constructor is also used the same way as the VError constructor except that in all cases, the strict property is overridden to true.

Public properties

VError, WError, and SError all provide the same public properties as JavaScript’s built-in Error objects.

  • name (string): Programmatically-usable name of the error.
  • message (string): Human-readable summary of the failure. Programmatically-accessible details are provided through the VError.info(err) class method.
  • stack (string): Human-readable stack trace where the Error was constructed.

For all of these classes, the printf-style arguments passed to the constructor are processed with sprintf() to form a message. For WError, this becomes the complete message property. For SError and VError, this message is prepended to the message of the cause, if any (with a suitable separator), and the result becomes the message property.

The stack property is managed entirely by the underlying JavaScript implementation. It’s generally implemented using a getter function because constructing the human-readable stack trace is somewhat expensive.

Class methods

The following methods are defined on the VError class and as exported functions on the verror module. They’re defined this way rather than using methods on VError instances so that they can be used on Errors not created with VError.

VError.cause(err)

The cause() function returns the next Error in the cause chain for err, or null if there is no next error. See the cause argument to the constructor. Errors can have arbitrarily long cause chains. You can walk the cause chain by invoking VError.cause(err) on each subsequent return value. If err is not a VError, the cause is null.

VError.info(err)

Returns an object with all of the extra error information that’s been associated with this Error and all of its causes. These are the properties passed in using the info option to the constructor. Properties not specified in the constructor for this Error are implicitly inherited from this error’s cause.

These properties are intended to provide programmatically-accessible metadata about the error. For an error that indicates a failure to resolve a DNS name, informational properties might include the DNS name to be resolved, or even the list of resolvers used to resolve it. The values of these properties should generally be plain objects (i.e., consisting only of null, undefined, numbers, booleans, strings, and objects and arrays containing only other plain objects).
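The way properties inherit up the cause chain can be sketched as follows (a stand-in for illustration, not the real VError.info implementation):

```javascript
// Stand-in: info properties merge up the cause chain, with properties
// nearer the top of the chain overriding same-named ones below.
function infoOf(err) {
  const inherited = err.cause ? infoOf(err.cause) : {};
  return Object.assign({}, inherited, err.info || {});
}

const lower = { info: { errno: 'ECONNREFUSED', port: 215 } };
const upper = { cause: lower, info: { errno: 'EBADREQUEST' } };

console.log(infoOf(upper)); // { errno: 'EBADREQUEST', port: 215 }
```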

VError.fullStack(err)

Returns a string containing the full stack trace, with all nested errors recursively reported as 'caused by:' + err.stack.

VError.findCauseByName(err, name)

The findCauseByName() function traverses the cause chain for err, looking for an error whose name property matches the passed in name value. If no match is found, null is returned.

If all you want is to know whether there’s a cause (and you don’t care what it is), you can use VError.hasCauseWithName(err, name).

If a vanilla error or a non-VError error is passed in, then there is no cause chain to traverse. In this scenario, the function will check the name property of only err.

VError.hasCauseWithName(err, name)

Returns true if and only if VError.findCauseByName(err, name) would return a non-null value. This essentially determines whether err has any cause in its cause chain that has name name.

VError.errorFromList(errors)

Given an array of Error objects (possibly empty), return a single error representing the whole collection of errors. If the list has:

  • 0 elements, returns null
  • 1 element, returns the sole error
  • more than 1 element, returns a MultiError referencing the whole list

This is useful for cases where an operation may produce any number of errors, and you ultimately want to implement the usual callback(err) pattern. You can accumulate the errors in an array and then invoke callback(VError.errorFromList(errors)) when the operation is complete.
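The contract just described can be sketched like this (a stand-in, not verror's implementation; the real function returns a proper MultiError in the many-errors case):

```javascript
function errorFromList(errors) {
  if (errors.length === 0) return null;
  if (errors.length === 1) return errors[0];
  // Stand-in for MultiError: first error's message, prefixed with a count.
  const multi = new Error(
    'first of ' + errors.length + ' errors: ' + errors[0].message);
  multi.errors = () => errors.slice();
  return multi;
}

console.log(errorFromList([]));                          // null
console.log(errorFromList([new Error('boom')]).message); // boom
console.log(errorFromList([new Error('a'), new Error('b')]).message);
// first of 2 errors: a
```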

VError.errorForEach(err, func)

Convenience function for iterating an error that may itself be a MultiError.

In all cases, err must be an Error. If err is a MultiError, then func is invoked as func(errorN) for each of the underlying errors of the MultiError. If err is any other kind of error, func is invoked once as func(err). In all cases, func is invoked synchronously.

This is useful for cases where an operation may produce any number of warnings that may be encapsulated with a MultiError – but may not be.

This function does not iterate an error’s cause chain.

Examples

The “Demo” section above covers several basic cases. Here’s a more advanced case:

This outputs:

failed to connect to “127.0.0.1:215”: something bad happened
ConnectionError { errno: ‘ECONNREFUSED’, remote_ip: ‘127.0.0.1’, port: 215 }
ConnectionError: failed to connect to “127.0.0.1:215”: something bad happened
    at Object.<anonymous> (/home/dap/node-verror/examples/info.js:5:12)
    at Module._compile (module.js:456:26)
    at Object.Module._extensions..js (module.js:474:10)
    at Module.load (module.js:356:32)
    at Function.Module._load (module.js:312:12)
    at Function.Module.runMain (module.js:497:10)
    at startup (node.js:119:16)
    at node.js:935:3

Information properties are inherited up the cause chain, with values at the top of the chain overriding same-named values lower in the chain. To continue that example:

This outputs:

request failed: failed to connect to “127.0.0.1:215”: something bad happened
RequestError { errno: ‘EBADREQUEST’, remote_ip: ‘127.0.0.1’, port: 215 }
RequestError: request failed: failed to connect to “127.0.0.1:215”: something bad happened
    at Object.<anonymous> (/home/dap/node-verror/examples/info.js:20:12)
    at Module._compile (module.js:456:26)
    at Object.Module._extensions..js (module.js:474:10)
    at Module.load (module.js:356:32)
    at Function.Module._load (module.js:312:12)
    at Function.Module.runMain (module.js:497:10)
    at startup (node.js:119:16)
    at node.js:935:3

You can also print the complete stack trace of combined Errors by using VError.fullStack(err).

This outputs:

VError: something really bad happened here: something bad happened
    at Object.<anonymous> (/home/dap/node-verror/examples/fullStack.js:5:12)
    at Module._compile (module.js:409:26)
    at Object.Module._extensions..js (module.js:416:10)
    at Module.load (module.js:343:32)
    at Function.Module._load (module.js:300:12)
    at Function.Module.runMain (module.js:441:10)
    at startup (node.js:139:18)
    at node.js:968:3
caused by: VError: something bad happened
    at Object.<anonymous> (/home/dap/node-verror/examples/fullStack.js:3:12)
    at Module._compile (module.js:409:26)
    at Object.Module._extensions..js (module.js:416:10)
    at Module.load (module.js:343:32)
    at Function.Module._load (module.js:300:12)
    at Function.Module.runMain (module.js:441:10)
    at startup (node.js:139:18)
    at node.js:968:3

VError.fullStack is also safe to use on regular Errors, so feel free to use it whenever you need to extract the stack trace from an Error, regardless of whether it’s a VError.



Reference: MultiError

MultiError is an Error class that represents a group of Errors. This is used when you logically need to provide a single Error, but you want to preserve information about multiple underlying Errors. A common case is when you execute several operations in parallel and some of them fail.

MultiErrors are constructed as:

error_list is an array of at least one Error object.

The cause of the MultiError is the first error provided. None of the other VError options are supported. The message for a MultiError consists of the message from the first error, prepended with a message indicating that there were other errors.

For example:

outputs:

first of 2 errors: failed to resolve DNS name “abc.example.com”

See the convenience function VError.errorFromList, which is sometimes simpler to use than this constructor.

Public methods

errors()

Returns an array of the errors used to construct this MultiError.



Contributing

See separate contribution guidelines.



braces Donate NPM version NPM monthly downloads NPM total downloads Linux Build Status

Bash-like brace expansion, implemented in JavaScript. Safer than other brace expansion libs, with complete support for the Bash 4.3 braces specification, without sacrificing speed.

Please consider following this project’s author, Jon Schlinkert, and consider starring the project to show your :heart: and support.

Install

Install with npm:

v3.0.0 Released!!

See the changelog for details.

Why use braces?

Brace patterns make globs more powerful by adding the ability to match specific ranges and sequences of characters.

  • fast and performant - Starts fast, runs fast and scales well as patterns increase in complexity.
  • Organized code base - The parser and compiler are easy to maintain and update when edge cases crop up.
  • Well-tested - Thousands of test assertions, and passes all of the Bash, minimatch, and brace-expansion unit tests (as of the date this was written).
  • Safer - You shouldn’t have to worry about users defining aggressive or malicious brace patterns that can break your application. Braces takes measures to prevent malicious regex that can be used for DDoS attacks (see catastrophic backtracking).

Usage

The main export is a function that takes one or more brace patterns and options.
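The shape of the API can be sketched with a tiny list-only expander (an illustrative stand-in that handles a single {a,b,c} group, not the real braces module):

```javascript
// Illustrative only: expands one comma list, the simplest brace pattern.
function expandList(pattern) {
  const m = pattern.match(/^(.*)\{([^{}]+)\}(.*)$/);
  if (!m) return [pattern]; // no braces: pass the string through
  return m[2].split(',').map(item => m[1] + item + m[3]);
}

console.log(expandList('foo/{a,b,c}/bar'));
// [ 'foo/a/bar', 'foo/b/bar', 'foo/c/bar' ]
```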

Brace Expansion vs. Compilation

By default, brace patterns are compiled into strings that are optimized for creating regular expressions and matching.

Compiled

Expanded

Enable brace expansion by setting the expand option to true, or by using braces.expand() (returns an array similar to what you’d expect from Bash, or echo {1..5}, or minimatch):

Lists

Expand lists (like Bash “sets”):

Sequences

Expand ranges of characters (like Bash “sequences”):

See fill-range for all available range-expansion options.

Stepped ranges

Steps, or increments, may be used with ranges:

When the .optimize method is used, or options.optimize is set to true, sequences are passed to to-regex-range for expansion.

Nesting

Brace patterns may be nested. The results of each expanded string are not sorted, and left to right order is preserved.

“Expanded” braces

“Optimized” braces

Escaping

Escaping braces

A brace pattern will not be expanded or evaluated if either the opening or closing brace is escaped:

Escaping commas

Commas inside braces may also be escaped:

Single items

Following bash conventions, a brace pattern is also not expanded when it contains a single character:

Options

options.maxLength

Type: Number

Default: 65,536

Description: Limit the length of the input string. Useful when the input string is generated or your application allows users to pass a string, et cetera.

options.expand

Type: Boolean

Default: undefined

Description: Generate an “expanded” brace pattern (alternatively you can use the braces.expand() method, which does the same thing).

options.nodupes

Type: Boolean

Default: undefined

Description: Remove duplicates from the returned array.

options.rangeLimit

Type: Number

Default: 1000

Description: To prevent malicious patterns from being passed by users, an error is thrown when braces.expand() is used (or options.expand is true) and the generated range would exceed the rangeLimit.

You can customize options.rangeLimit or set it to Infinity to disable this altogether.
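The guard described above can be sketched as follows (a stand-in for the check, not the real braces internals):

```javascript
// Refuse to expand a numeric range whose expanded size exceeds the limit.
function checkRangeLimit(start, end, rangeLimit = 1000) {
  const size = Math.abs(end - start) + 1;
  if (size > rangeLimit) {
    throw new RangeError('expanded range exceeds rangeLimit of ' + rangeLimit);
  }
  return size;
}

console.log(checkRangeLimit(1, 100)); // 100
try {
  checkRangeLimit(1, 1000000);
} catch (e) {
  console.log(e.message); // expanded range exceeds rangeLimit of 1000
}
```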

Examples

options.transform

Type: Function

Default: undefined

Description: Customize range expansion.

Example: Transforming non-numeric values

Example: Transforming numeric values

options.quantifiers

Type: Boolean

Default: undefined

Description: In regular expressions, quantifiers can be used to specify how many times a token can be repeated. For example, a{1,3} will match the letter a one to three times.

Unfortunately, regex quantifiers happen to share the same syntax as Bash lists.

The quantifiers option tells braces to detect when regex quantifiers are defined in the given pattern, and not to try to expand them as lists.

Examples

options.unescape

Type: Boolean

Default: undefined

Description: Strip backslashes that were used for escaping from the result.

What is “brace expansion”?

Brace expansion is a type of parameter expansion that was made popular by unix shells for generating lists of strings, as well as regex-like matching when used alongside wildcards (globs).

In addition to “expansion”, braces are also used for matching. In other words:

More about brace expansion (click to expand)

There are two main types of brace expansion:

  1. lists: which are defined using comma-separated values inside curly braces: {a,b,c}
  2. sequences: which are defined using a starting value and an ending value, separated by two dots: a{1..3}b. Optionally, a third argument may be passed to define a “step” or increment to use: a{1..100..10}b. These are also sometimes referred to as “ranges”.

Here are some example brace patterns to illustrate how they work:

Sets

{a,b,c}       => a b c
{a,b,c}{1,2}  => a1 a2 b1 b2 c1 c2

Sequences

{1..9}        => 1 2 3 4 5 6 7 8 9
{4..-4}       => 4 3 2 1 0 -1 -2 -3 -4
{1..20..3}    => 1 4 7 10 13 16 19
{a..j}        => a b c d e f g h i j
{j..a}        => j i h g f e d c b a
{a..z..3}     => a d g j m p s v y

Combination

Sets and sequences can be mixed together or used along with any other strings.

{a,b,c}{1..3}   => a1 a2 a3 b1 b2 b3 c1 c2 c3
foo/{a,b,c}/bar => foo/a/bar foo/b/bar foo/c/bar

The fact that braces can be “expanded” from relatively simple patterns makes them ideal for quickly generating test fixtures, file paths, and similar use cases.

Brace matching

In addition to expansion, brace patterns are also useful for performing regular-expression-like matching.

For example, the pattern foo/{1..3}/bar would match any of following strings:

foo/1/bar
foo/2/bar
foo/3/bar

But not:

baz/1/qux
baz/2/qux
baz/3/qux

Braces can also be combined with glob patterns to perform more advanced wildcard matching. For example, the pattern */{1..3}/* would match any of following strings:

foo/1/bar
foo/2/bar
foo/3/bar
baz/1/qux
baz/2/qux
baz/3/qux

Brace matching pitfalls

Although brace patterns offer a user-friendly way of matching ranges or sets of strings, there are also some major disadvantages and potential risks you should be aware of.

tldr

“brace bombs”

  • brace expansion can eat up a huge amount of processing resources
  • as brace patterns increase linearly in size, the system resources required to expand the pattern increase exponentially
  • users can accidentally (or intentionally) exhaust your system’s resources resulting in the equivalent of a DoS attack (bonus: no programming knowledge is required!)

For a more detailed explanation with examples, see the geometric complexity section.

The solution

Jump to the performance section to see how Braces solves this problem in comparison to other libraries.

Geometric complexity

At minimum, brace patterns with sets limited to two elements have quadratic or O(n^2) complexity. But the complexity of the algorithm increases exponentially as the number of sets, and the number of elements per set, increases, which is O(n^c).

For example, the following sets demonstrate quadratic (O(n^2)) complexity:

{1,2}{3,4}      => (2X2)    => 13 14 23 24
{1,2}{3,4}{5,6} => (2X2X2)  => 135 136 145 146 235 236 245 246

But add an element to a set, and we get a n-fold Cartesian product with O(n^c) complexity:

{1,2,3}{4,5,6}{7,8,9} => (3X3X3) => 147 148 149 157 158 159 167 168 169 247 248 
                                    249 257 258 259 267 268 269 347 348 349 357 
                                    358 359 367 368 369

Now, imagine how this complexity grows given that each element is a n-tuple:

{1..100}{1..100}         => (100X100)     => 10,000 elements (38.4 kB)
{1..100}{1..100}{1..100} => (100X100X100) => 1,000,000 elements (5.76 MB)

Although these examples are clearly contrived, they demonstrate how brace patterns can quickly grow out of control.
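The growth is just the product of the set sizes, which is easy to verify:

```javascript
// Number of strings produced by expanding a pattern whose sets have the
// given sizes: the size of the Cartesian product of the sets.
function expansionCount(setSizes) {
  return setSizes.reduce((total, size) => total * size, 1);
}

console.log(expansionCount([2, 2]));          // 4, as in {1,2}{3,4}
console.log(expansionCount([3, 3, 3]));       // 27, as in {1,2,3}{4,5,6}{7,8,9}
console.log(expansionCount([100, 100, 100])); // 1000000, as in {1..100}{1..100}{1..100}
```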

More information

Interested in learning more about brace expansion?

Performance

Braces is not only screaming fast, it’s also more accurate than other brace expansion libraries.

Better algorithms

Fortunately there is a solution to the “brace bomb” problem: don’t expand brace patterns into an array when they’re used for matching.

Instead, convert the pattern into an optimized regular expression. This is easier said than done, and braces is the only library that does this currently.

The proof is in the numbers

Minimatch gets exponentially slower as patterns increase in complexity; braces does not. The following results were generated using braces() and minimatch.braceExpand(), respectively.

Pattern braces minimatch
{1..9007199254740991}[^1] 298 B (5ms 459μs) N/A (freezes)
{1..1000000000000000} 41 B (1ms 15μs) N/A (freezes)
{1..100000000000000} 40 B (890μs) N/A (freezes)
{1..10000000000000} 39 B (2ms 49μs) N/A (freezes)
{1..1000000000000} 38 B (608μs) N/A (freezes)
{1..100000000000} 37 B (397μs) N/A (freezes)
{1..10000000000} 35 B (983μs) N/A (freezes)
{1..1000000000} 34 B (798μs) N/A (freezes)
{1..100000000} 33 B (733μs) N/A (freezes)
{1..10000000} 32 B (5ms 632μs) 78.89 MB (16s 388ms 569μs)
{1..1000000} 31 B (1ms 381μs) 6.89 MB (1s 496ms 887μs)
{1..100000} 30 B (950μs) 588.89 kB (146ms 921μs)
{1..10000} 29 B (1ms 114μs) 48.89 kB (14ms 187μs)
{1..1000} 28 B (760μs) 3.89 kB (1ms 453μs)
{1..100} 22 B (345μs) 291 B (196μs)
{1..10} 10 B (533μs) 20 B (37μs)
{1..3} 7 B (190μs) 5 B (27μs)

Faster algorithms

When you need expansion, braces is still much faster.

(the following results were generated using braces.expand() and minimatch.braceExpand(), respectively)

Pattern braces minimatch
{1..10000000} 78.89 MB (2s 698ms 642μs) 78.89 MB (18s 601ms 974μs)
{1..1000000} 6.89 MB (458ms 576μs) 6.89 MB (1s 491ms 621μs)
{1..100000} 588.89 kB (20ms 728μs) 588.89 kB (156ms 919μs)
{1..10000} 48.89 kB (2ms 202μs) 48.89 kB (13ms 641μs)
{1..1000} 3.89 kB (1ms 796μs) 3.89 kB (1ms 958μs)
{1..100} 291 B (424μs) 291 B (211μs)
{1..10} 20 B (487μs) 20 B (72μs)
{1..3} 5 B (166μs) 5 B (27μs)

If you’d like to run these comparisons yourself, see test/support/generate.js.

Benchmarks

Running benchmarks

Install dev dependencies:

Latest results

Braces is more accurate, without sacrificing performance.

About

Contributing

Pull requests and stars are always welcome. For bugs and feature requests, please create an issue.

Running Tests

Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command:

Building docs

(This project’s readme.md is generated by verb, please don’t edit the readme directly. Any changes to the readme must be made in the .verb.md readme template.)

To generate the readme, run the following command:

Commits Contributor
197 jonschlinkert
4 doowb
1 es128
1 eush77
1 hemanth
1 wtgtybhertgeghgtwtg

Author

Jon Schlinkert


This file was generated by verb-generate-readme, v0.8.0, on April 08, 2019.



snapdragon-util NPM version NPM monthly downloads NPM total downloads Linux Build Status

Utilities for the snapdragon parser/compiler.

Table of Contents

Install

Install with npm:

Install with yarn:

Usage

API

.isNode

Returns true if the given value is a node.

Params

Example

.noop

Emit an empty string for the given node.

Params

Example

.identity

Append node.val to compiler.output, exactly as it was created by the parser.

Params

Example

.append

Previously named .emit, this method appends the given val to compiler.output for the given node. Useful when you know in advance what value should be appended, regardless of the actual value of node.val.

Params

  • node {Object}: Instance of snapdragon-node
  • returns {Function}: Returns a compiler middleware function.

Example

.toNoop

Used in compiler middleware, this converts an AST node into an empty text node and deletes node.nodes if it exists. The advantage of this method is that, as opposed to completely removing the node, indices will not need to be re-calculated in sibling nodes, and nothing is appended to the output.

Params

  • node {Object}: Instance of snapdragon-node
  • nodes {Array}: Optionally pass a new nodes value, to replace the existing node.nodes array.

Example

.visit

Visit node with the given fn. The built-in .visit method in snapdragon automatically calls registered compilers; this allows you to pass a visitor function.

Params

  • node {Object}: Instance of snapdragon-node
  • fn {Function}
  • returns {Object}: returns the node after recursively visiting all child nodes.

Example

.mapVisit

Map visit the given fn over node.nodes. This is called by visit; use this method if you do not want fn to be called on the first node.

Params

  • node {Object}: Instance of snapdragon-node
  • options {Object}
  • fn {Function}
  • returns {Object}: returns the node

Example

.addOpen

Unshift an *.open node onto node.nodes.

Params

  • node {Object}: Instance of snapdragon-node
  • Node {Function}: (required) Node constructor function from snapdragon-node.
  • filter {Function}: Optionally specify a filter function to exclude the node.
  • returns {Object}: Returns the created opening node.

Example

.addClose

Push a *.close node onto node.nodes.

Params

  • node {Object}: Instance of snapdragon-node
  • Node {Function}: (required) Node constructor function from snapdragon-node.
  • filter {Function}: Optionally specify a filter function to exclude the node.
  • returns {Object}: Returns the created closing node.

Example

.wrapNodes

Wraps the given node with *.open and *.close nodes.

Params

  • node {Object}: Instance of snapdragon-node
  • Node {Function}: (required) Node constructor function from snapdragon-node.
  • filter {Function}: Optionally specify a filter function to exclude the node.
  • returns {Object}: Returns the node

.pushNode

Push the given node onto parent.nodes, and set parent as node.parent.

Params

  • parent {Object}
  • node {Object}: Instance of snapdragon-node
  • returns {Object}: Returns the child node

Example
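A sketch of the behavior, with plain objects standing in for real snapdragon-node instances:

```javascript
// Stand-in: push child onto parent.nodes and point child.parent at parent.
function pushNode(parent, node) {
  parent.nodes = parent.nodes || [];
  node.parent = parent;
  parent.nodes.push(node);
  return node;
}

const parent = { type: 'root', nodes: [] };
const child = { type: 'text', val: 'foo' };
pushNode(parent, child);

console.log(parent.nodes.length);     // 1
console.log(child.parent === parent); // true
```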

.unshiftNode

Unshift node onto parent.nodes, and set parent as node.parent.

Params

  • parent {Object}
  • node {Object}: Instance of snapdragon-node
  • returns {undefined}

Example

.popNode

Pop the last node off of parent.nodes. The advantage of using this method is that it checks for node.nodes and works with any version of snapdragon-node.

Params

  • parent {Object}
  • node {Object}: Instance of snapdragon-node
  • returns {Number|Undefined}: Returns the length of node.nodes or undefined.

Example

.shiftNode

Shift the first node off of parent.nodes. The advantage of using this method is that it checks for node.nodes and works with any version of snapdragon-node.

Params

  • parent {Object}
  • node {Object}: Instance of snapdragon-node
  • returns {Number|Undefined}: Returns the length of node.nodes or undefined.

Example

.removeNode

Remove the specified node from parent.nodes.

Params

  • parent {Object}
  • node {Object}: Instance of snapdragon-node
  • returns {Object|undefined}: Returns the removed node, if successful, or undefined if it does not exist on parent.nodes.

Example

.isType

Returns true if node.type matches the given type. Throws a TypeError if node is not an instance of Node.

Params

  • node {Object}: Instance of snapdragon-node
  • type {String}
  • returns {Boolean}

Example

.hasType

Returns true if the given node has the given type in node.nodes. Throws a TypeError if node is not an instance of Node.

Params

  • node {Object}: Instance of snapdragon-node
  • type {String}
  • returns {Boolean}

Example

.firstOfType

Returns the first node from node.nodes of the given type.

Params

  • nodes {Array}
  • type {String}
  • returns {Object|undefined}: Returns the first matching node or undefined.

Example

.findNode

Returns the node at the specified index, or the first node of the given type from node.nodes.

Params

  • nodes {Array}
  • type {String|Number}: Node type or index.
  • returns {Object}: Returns a node or undefined.

Example

.isOpen

Returns true if the given node is an "*.open" node.

Params

Example

.isClose

Returns true if the given node is a "*.close" node.

Params

Example

.hasOpen

Returns true if node.nodes has an .open node

Params

Example

.hasClose

Returns true if node.nodes has a .close node

Params

Example

.hasOpenAndClose

Returns true if node.nodes has both .open and .close nodes

Params

Example

.addType

Push the given node onto the state.inside array for the given type. This array is used as a specialized “stack” for only the given node.type.

Params

  • state {Object}: The compiler.state object or custom state object.
  • node {Object}: Instance of snapdragon-node
  • returns {Array}: Returns the state.inside stack for the given type.

Example

.removeType

Remove the given node from the state.inside array for the given type. This array is used as a specialized “stack” for only the given node.type.

Params

  • state {Object}: The compiler.state object or custom state object.
  • node {Object}: Instance of snapdragon-node
  • returns {Array}: Returns the state.inside stack for the given type.

Example

.isEmpty

Returns true if node.val is an empty string, or node.nodes does not contain any non-empty text nodes.

Params

  • node {Object}: Instance of snapdragon-node
  • fn {Function}
  • returns {Boolean}

Example

.isInsideType

Returns true if the state.inside stack for the given type exists and has one or more nodes on it.

Params

  • state {Object}
  • type {String}
  • returns {Boolean}

Example

.isInside

Returns true if node is either a child or grand-child of the given type, or state.inside[type] is a non-empty array.

Params

  • state {Object}: Either the compiler.state object, if it exists, or a user-supplied state object.
  • node {Object}: Instance of snapdragon-node
  • type {String}: The node.type to check for.
  • returns {Boolean}

Example

.last

Get the last n elements from the given array. Used for getting a node from node.nodes.

Params

  • array {Array}
  • n {Number}
  • returns {undefined}

.arrayify

Cast the given val to an array.

Params

  • val {any}
  • returns {Array}

Example

.stringify

Convert the given val to a string by joining with ,. Useful for creating a cheerio/CSS/DOM-style selector from a list of strings.

Params

  • val {any}
  • returns {String}

.trim

Ensure that the given value is a string and call .trim() on it, or return an empty string.

Params

  • str {String}
  • returns {String}

Release history

Changelog entries are classified using the following labels from keep-a-changelog:

  • added: for new features
  • changed: for changes in existing functionality
  • deprecated: for once-stable features removed in upcoming releases
  • removed: for deprecated features removed in this release
  • fixed: for any bug fixes

Custom labels used in this changelog:

  • dependencies: bumps dependencies
  • housekeeping: code re-organization, minor edits, or other changes that don’t fit in one of the other categories.

[3.0.0] - 2017-05-01

Changed

  • .emit was renamed to .append
  • .addNode was renamed to .pushNode
  • .getNode was renamed to .findNode
  • .isEmptyNodes was renamed to .isEmpty: also now works with node.nodes and/or node.val

Added

[0.1.0]

First release.

About

Contributing

Pull requests and stars are always welcome. For bugs and feature requests, please create an issue.

Please read the contributing guide for advice on opening issues, pull requests, and coding standards.

Building docs

(This project’s readme.md is generated by verb, please don’t edit the readme directly. Any changes to the readme must be made in the .verb.md readme template.)

To generate the readme, run the following command:

Running tests

Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command:

Author

Jon Schlinkert


This file was generated by verb-generate-readme, v0.6.0, on May 01, 2017.


semver(1) – The semantic versioner for npm

Install

Usage

As a node module:

You can also just load the module for the function that you care about, if you’d like to minimize your footprint.

// load the whole API at once in a single object
const semver = require('semver')

// or just load the bits you need
// all of them listed here, just pick and choose what you want

// classes
const SemVer = require('semver/classes/semver')
const Comparator = require('semver/classes/comparator')
const Range = require('semver/classes/range')

// functions for working with versions
const semverParse = require('semver/functions/parse')
const semverValid = require('semver/functions/valid')
const semverClean = require('semver/functions/clean')
const semverInc = require('semver/functions/inc')
const semverDiff = require('semver/functions/diff')
const semverMajor = require('semver/functions/major')
const semverMinor = require('semver/functions/minor')
const semverPatch = require('semver/functions/patch')
const semverPrerelease = require('semver/functions/prerelease')
const semverCompare = require('semver/functions/compare')
const semverRcompare = require('semver/functions/rcompare')
const semverCompareLoose = require('semver/functions/compare-loose')
const semverCompareBuild = require('semver/functions/compare-build')
const semverSort = require('semver/functions/sort')
const semverRsort = require('semver/functions/rsort')

// low-level comparators between versions
const semverGt = require('semver/functions/gt')
const semverLt = require('semver/functions/lt')
const semverEq = require('semver/functions/eq')
const semverNeq = require('semver/functions/neq')
const semverGte = require('semver/functions/gte')
const semverLte = require('semver/functions/lte')
const semverCmp = require('semver/functions/cmp')
const semverCoerce = require('semver/functions/coerce')

// working with ranges
const semverSatisfies = require('semver/functions/satisfies')
const semverMaxSatisfying = require('semver/ranges/max-satisfying')
const semverMinSatisfying = require('semver/ranges/min-satisfying')
const semverToComparators = require('semver/ranges/to-comparators')
const semverMinVersion = require('semver/ranges/min-version')
const semverValidRange = require('semver/ranges/valid')
const semverOutside = require('semver/ranges/outside')
const semverGtr = require('semver/ranges/gtr')
const semverLtr = require('semver/ranges/ltr')
const semverIntersects = require('semver/ranges/intersects')
const simplifyRange = require('semver/ranges/simplify')
const rangeSubset = require('semver/ranges/subset')

As a command-line utility:

$ semver -h

A JavaScript implementation of the https://semver.org/ specification

Usage: semver [options] <version> [<version> [...]]
Prints valid versions sorted by SemVer precedence

Options:
-r --range <range>
        Print versions that match the specified range.

-i --increment [<level>]
        Increment a version by the specified level.  Level can
        be one of: major, minor, patch, premajor, preminor,
        prepatch, or prerelease.  Default level is 'patch'.
        Only one version may be specified.

--preid <identifier>
        Identifier to be used to prefix premajor, preminor,
        prepatch or prerelease version increments.

-l --loose
        Interpret versions and ranges loosely

-p --include-prerelease
        Always include prerelease versions in range matching

-c --coerce
        Coerce a string into SemVer if possible
        (does not imply --loose)

--rtl
        Coerce version strings right to left

--ltr
        Coerce version strings left to right (default)

Program exits successfully if any valid version satisfies
all supplied ranges, and prints all satisfying versions.

If no satisfying versions are found, then exits failure.

Versions are printed in ascending order, so supplying
multiple versions to the utility will just sort them.

Versions

A “version” is described by the v2.0.0 specification found at https://semver.org/.

A leading "=" or "v" character is stripped off and ignored.

Ranges

A version range is a set of comparators which specify versions that satisfy the range.

A comparator is composed of an operator and a version. The set of primitive operators is:

  • < Less than
  • <= Less than or equal to
  • > Greater than
  • >= Greater than or equal to
  • = Equal. If no operator is specified, then equality is assumed, so this operator is optional, but MAY be included.

For example, the comparator >=1.2.7 would match the versions 1.2.7, 1.2.8, 2.5.3, and 1.3.9, but not the versions 1.2.6 or 1.1.0.

Comparators can be joined by whitespace to form a comparator set, which is satisfied by the intersection of all of the comparators it includes.

A range is composed of one or more comparator sets, joined by ||. A version matches a range if and only if every comparator in at least one of the ||-separated comparator sets is satisfied by the version.

For example, the range >=1.2.7 <1.3.0 would match the versions 1.2.7, 1.2.8, and 1.2.99, but not the versions 1.2.6, 1.3.0, or 1.1.0.

The range 1.2.7 || >=1.2.9 <2.0.0 would match the versions 1.2.7, 1.2.9, and 1.4.6, but not the versions 1.2.8 or 2.0.0.

Prerelease Tags

If a version has a prerelease tag (for example, 1.2.3-alpha.3) then it will only be allowed to satisfy comparator sets if at least one comparator with the same [major, minor, patch] tuple also has a prerelease tag.

For example, the range >1.2.3-alpha.3 would be allowed to match the version 1.2.3-alpha.7, but it would not be satisfied by 3.4.5-alpha.9, even though 3.4.5-alpha.9 is technically “greater than” 1.2.3-alpha.3 according to the SemVer sort rules. The version range only accepts prerelease tags on the 1.2.3 version. The version 3.4.5 would satisfy the range, because it does not have a prerelease flag, and 3.4.5 is greater than 1.2.3-alpha.7.

The purpose for this behavior is twofold. First, prerelease versions frequently are updated very quickly, and contain many breaking changes that are (by the author’s design) not yet fit for public consumption. Therefore, by default, they are excluded from range matching semantics.

Second, a user who has opted into using a prerelease version has clearly indicated the intent to use that specific set of alpha/beta/rc versions. By including a prerelease tag in the range, the user is indicating that they are aware of the risk. However, it is still not appropriate to assume that they have opted into taking a similar risk on the next set of prerelease versions.

Note that this behavior can be suppressed (treating all prerelease versions as if they were normal versions, for the purpose of range matching) by setting the includePrerelease flag on the options object to any functions that do range matching.

Prerelease Identifiers

The method .inc takes an additional identifier string argument that will append the value of the string as a prerelease identifier:

For example, on the command line, semver 1.2.3 -i prerelease --preid beta prints 1.2.4-beta.0, which can then be incremented further: semver 1.2.4-beta.0 -i prerelease prints 1.2.4-beta.1.

Advanced Range Syntax

Advanced range syntax desugars to primitive comparators in deterministic ways.

Advanced ranges may be combined in the same way as primitive comparators using white space or ||.

Hyphen Ranges X.Y.Z - A.B.C

Specifies an inclusive set.

  • 1.2.3 - 2.3.4 := >=1.2.3 <=2.3.4

If a partial version is provided as the first version in the inclusive range, then the missing pieces are replaced with zeroes.

  • 1.2 - 2.3.4 := >=1.2.0 <=2.3.4

If a partial version is provided as the second version in the inclusive range, then all versions that start with the supplied parts of the tuple are accepted, but nothing that would be greater than the provided tuple parts.

  • 1.2.3 - 2.3 := >=1.2.3 <2.4.0-0
  • 1.2.3 - 2 := >=1.2.3 <3.0.0-0

X-Ranges 1.2.x 1.X 1.2.* *

Any of X, x, or * may be used to “stand in” for one of the numeric values in the [major, minor, patch] tuple.

  • * := >=0.0.0 (Any version satisfies)
  • 1.x := >=1.0.0 <2.0.0-0 (Matching major version)
  • 1.2.x := >=1.2.0 <1.3.0-0 (Matching major and minor versions)

A partial version range is treated as an X-Range, so the special character is in fact optional.

  • "" (empty string) := * := >=0.0.0
  • 1 := 1.x.x := >=1.0.0 <2.0.0-0
  • 1.2 := 1.2.x := >=1.2.0 <1.3.0-0

Tilde Ranges ~1.2.3 ~1.2 ~1

Allows patch-level changes if a minor version is specified on the comparator. Allows minor-level changes if not.

  • ~1.2.3 := >=1.2.3 <1.(2+1).0 := >=1.2.3 <1.3.0-0
  • ~1.2 := >=1.2.0 <1.(2+1).0 := >=1.2.0 <1.3.0-0 (Same as 1.2.x)
  • ~1 := >=1.0.0 <(1+1).0.0 := >=1.0.0 <2.0.0-0 (Same as 1.x)
  • ~0.2.3 := >=0.2.3 <0.(2+1).0 := >=0.2.3 <0.3.0-0
  • ~0.2 := >=0.2.0 <0.(2+1).0 := >=0.2.0 <0.3.0-0 (Same as 0.2.x)
  • ~0 := >=0.0.0 <(0+1).0.0 := >=0.0.0 <1.0.0-0 (Same as 0.x)
  • ~1.2.3-beta.2 := >=1.2.3-beta.2 <1.3.0-0 Note that prereleases in the 1.2.3 version will be allowed, if they are greater than or equal to beta.2. So, 1.2.3-beta.4 would be allowed, but 1.2.4-beta.2 would not, because it is a prerelease of a different [major, minor, patch] tuple.

Caret Ranges ^1.2.3 ^0.2.5 ^0.0.4

Allows changes that do not modify the left-most non-zero element in the [major, minor, patch] tuple. In other words, this allows patch and minor updates for versions 1.0.0 and above, patch updates for versions 0.X >=0.1.0, and no updates for versions 0.0.X.

Many authors treat a 0.x version as if the x were the major “breaking-change” indicator.

Caret ranges are ideal when an author may make breaking changes between 0.2.4 and 0.3.0 releases, which is a common practice. However, it presumes that there will not be breaking changes between 0.2.4 and 0.2.5. It allows for changes that are presumed to be additive (but non-breaking), according to commonly observed practices.

  • ^1.2.3 := >=1.2.3 <2.0.0-0
  • ^0.2.3 := >=0.2.3 <0.3.0-0
  • ^0.0.3 := >=0.0.3 <0.0.4-0
  • ^1.2.3-beta.2 := >=1.2.3-beta.2 <2.0.0-0 Note that prereleases in the 1.2.3 version will be allowed, if they are greater than or equal to beta.2. So, 1.2.3-beta.4 would be allowed, but 1.2.4-beta.2 would not, because it is a prerelease of a different [major, minor, patch] tuple.
  • ^0.0.3-beta := >=0.0.3-beta <0.0.4-0 Note that prereleases in the 0.0.3 version only will be allowed, if they are greater than or equal to beta. So, 0.0.3-pr.2 would be allowed.

When parsing caret ranges, a missing patch value desugars to the number 0, but will allow flexibility within that value, even if the major and minor versions are both 0.

  • ^1.2.x := >=1.2.0 <2.0.0-0
  • ^0.0.x := >=0.0.0 <0.1.0-0
  • ^0.0 := >=0.0.0 <0.1.0-0

Missing minor and patch values desugar to zero, but also allow flexibility within those values, even if the major version is zero.

  • ^1.x := >=1.0.0 <2.0.0-0
  • ^0.x := >=0.0.0 <1.0.0-0

Range Grammar

Putting all this together, here is a Backus-Naur grammar for ranges, for the benefit of parser authors:

range-set  ::= range ( logical-or range ) *
logical-or ::= ( ' ' ) * '||' ( ' ' ) *
range      ::= hyphen | simple ( ' ' simple ) * | ''
hyphen     ::= partial ' - ' partial
simple     ::= primitive | partial | tilde | caret
primitive  ::= ( '<' | '>' | '>=' | '<=' | '=' ) partial
partial    ::= xr ( '.' xr ( '.' xr qualifier ? )? )?
xr         ::= 'x' | 'X' | '*' | nr
nr         ::= '0' | ['1'-'9'] ( ['0'-'9'] ) *
tilde      ::= '~' partial
caret      ::= '^' partial
qualifier  ::= ( '-' pre )? ( '+' build )?
pre        ::= parts
build      ::= parts
parts      ::= part ( '.' part ) *
part       ::= nr | [-0-9A-Za-z]+

Functions

All methods and classes take a final options object argument. All options in this object are false by default. The options supported are:

  • loose Be more forgiving about not-quite-valid semver strings. (Any resulting output will always be 100% strict compliant, of course.) For backwards compatibility reasons, if the options argument is a boolean value instead of an object, it is interpreted to be the loose param.
  • includePrerelease Set to suppress the default behavior of excluding prerelease tagged versions from ranges unless they are explicitly opted into.

Strict-mode Comparators and Ranges will be strict about the SemVer strings that they parse.

  • valid(v): Return the parsed version, or null if it’s not valid.
  • inc(v, release): Return the version incremented by the release type (major, premajor, minor, preminor, patch, prepatch, or prerelease), or null if it’s not valid
    • premajor in one call will bump the version up to the next major version and down to a prerelease of that major version. preminor, and prepatch work the same way.
    • If called from a non-prerelease version, the prerelease will work the same as prepatch. It increments the patch version, then makes a prerelease. If the input version is already a prerelease it simply increments it.
  • prerelease(v): Returns an array of prerelease components, or null if none exist. Example: prerelease('1.2.3-alpha.1') -> ['alpha', 1]
  • major(v): Return the major version number.
  • minor(v): Return the minor version number.
  • patch(v): Return the patch version number.
  • intersects(r1, r2, loose): Return true if the two supplied ranges or comparators intersect.
  • parse(v): Attempt to parse a string as a semantic version, returning either a SemVer object or null.

Comparison

  • gt(v1, v2): v1 > v2
  • gte(v1, v2): v1 >= v2
  • lt(v1, v2): v1 < v2
  • lte(v1, v2): v1 <= v2
  • eq(v1, v2): v1 == v2 This is true if they’re logically equivalent, even if they’re not the exact same string. You already know how to compare strings.
  • neq(v1, v2): v1 != v2 The opposite of eq.
  • cmp(v1, comparator, v2): Pass in a comparison string, and it’ll call the corresponding function above. "===" and "!==" do simple string comparison, but are included for completeness. Throws if an invalid comparison string is provided.
  • compare(v1, v2): Return 0 if v1 == v2, or 1 if v1 is greater, or -1 if v2 is greater. Sorts in ascending order if passed to Array.sort().
  • rcompare(v1, v2): The reverse of compare. Sorts an array of versions in descending order when passed to Array.sort().
  • compareBuild(v1, v2): The same as compare but considers build when two versions are equal. Sorts in ascending order if passed to Array.sort().
  • diff(v1, v2): Returns difference between two versions by the release type (major, premajor, minor, preminor, patch, prepatch, or prerelease), or null if the versions are the same.

Comparators

  • intersects(comparator): Return true if the comparators intersect

Ranges

  • validRange(range): Return the valid range or null if it’s not valid
  • satisfies(version, range): Return true if the version satisfies the range.
  • maxSatisfying(versions, range): Return the highest version in the list that satisfies the range, or null if none of them do.
  • minSatisfying(versions, range): Return the lowest version in the list that satisfies the range, or null if none of them do.
  • minVersion(range): Return the lowest version that can possibly match the given range.
  • gtr(version, range): Return true if version is greater than all the versions possible in the range.
  • ltr(version, range): Return true if version is less than all the versions possible in the range.
  • outside(version, range, hilo): Return true if the version is outside the bounds of the range in either the high or low direction. The hilo argument must be either the string '>' or '<'. (This is the function called by gtr and ltr.)
  • intersects(range): Return true if any of the ranges comparators intersect
  • simplifyRange(versions, range): Return a “simplified” range that matches the same items in versions list as the range specified. Note that it does not guarantee that it would match the same versions in all cases, only for the set of versions provided. This is useful when generating ranges by joining together multiple versions with || programmatically, to provide the user with something a bit more ergonomic. If the provided range is shorter in string-length than the generated range, then that is returned.
  • subset(subRange, superRange): Return true if the subRange range is entirely contained by the superRange range.

Note that, since ranges may be non-contiguous, a version might not be greater than a range, less than a range, or satisfy a range! For example, the range 1.2 <1.2.9 || >2.0.0 would have a hole from 1.2.9 until 2.0.0, so the version 1.2.10 would not be greater than the range (because 2.0.1 satisfies, which is higher), nor less than the range (since 1.2.8 satisfies, which is lower), and it also does not satisfy the range.

If you want to know if a version satisfies or does not satisfy a range, use the satisfies(version, range) function.

Coercion

  • coerce(version, options): Coerces a string to semver if possible

This aims to provide a very forgiving translation of a non-semver string to semver. It looks for the first digit in a string, and consumes all remaining characters which satisfy at least a partial semver (e.g., 1, 1.2, 1.2.3) up to the max permitted length (256 characters). Longer versions are simply truncated (4.6.3.9.2-alpha2 becomes 4.6.3). All surrounding text is simply ignored (v3.4 replaces v3.3.1 becomes 3.4.0). Only text which lacks digits will fail coercion (version one is not valid). The maximum length for any semver component considered for coercion is 16 characters; longer components will be ignored (10000000000000000.4.7.4 becomes 4.7.4). The maximum value for any semver component is Number.MAX_SAFE_INTEGER || (2**53 - 1); higher value components are invalid (9999999999999999.4.7.4 is likely invalid).

If the options.rtl flag is set, then coerce will return the right-most coercible tuple that does not share an ending index with a longer coercible tuple. For example, 1.2.3.4 will return 2.3.4 in rtl mode, not 4.0.0. 1.2.3/4 will return 4.0.0, because the 4 is not a part of any other overlapping SemVer tuple.

Clean

  • clean(version): Clean a string to be a valid semver if possible

This will return a cleaned and trimmed semver version. If the provided version is not valid, null is returned. This does not work for ranges.

Examples:

  • s.clean(' = v 2.1.5foo'): null
  • s.clean(' = v 2.1.5foo', { loose: true }): '2.1.5-foo'
  • s.clean(' = v 2.1.5-foo'): null
  • s.clean(' = v 2.1.5-foo', { loose: true }): '2.1.5-foo'
  • s.clean('=v2.1.5'): '2.1.5'
  • s.clean(' =v2.1.5'): '2.1.5'
  • s.clean(' 2.1.5 '): '2.1.5'
  • s.clean('~1.0.0'): null

Exported Modules

You may pull in just the part of this semver utility that you need, if you are sensitive to packing and tree-shaking concerns. The main require('semver') export uses getter functions to lazily load the parts of the API that are used.

The following modules are available:

  • require('semver')
  • require('semver/classes')
  • require('semver/classes/comparator')
  • require('semver/classes/range')
  • require('semver/classes/semver')
  • require('semver/functions/clean')
  • require('semver/functions/cmp')
  • require('semver/functions/coerce')
  • require('semver/functions/compare')
  • require('semver/functions/compare-build')
  • require('semver/functions/compare-loose')
  • require('semver/functions/diff')
  • require('semver/functions/eq')
  • require('semver/functions/gt')
  • require('semver/functions/gte')
  • require('semver/functions/inc')
  • require('semver/functions/lt')
  • require('semver/functions/lte')
  • require('semver/functions/major')
  • require('semver/functions/minor')
  • require('semver/functions/neq')
  • require('semver/functions/parse')
  • require('semver/functions/patch')
  • require('semver/functions/prerelease')
  • require('semver/functions/rcompare')
  • require('semver/functions/rsort')
  • require('semver/functions/satisfies')
  • require('semver/functions/sort')
  • require('semver/functions/valid')
  • require('semver/ranges/gtr')
  • require('semver/ranges/intersects')
  • require('semver/ranges/ltr')
  • require('semver/ranges/max-satisfying')
  • require('semver/ranges/min-satisfying')
  • require('semver/ranges/min-version')
  • require('semver/ranges/outside')
  • require('semver/ranges/to-comparators')
  • require('semver/ranges/valid')


fast-glob

It’s a very fast and efficient glob library for Node.js.

This package provides methods for traversing the file system and returning pathnames that match a specified set of patterns, according to the rules used by the Unix Bash shell with some simplifications. Results are returned in arbitrary order. Quick, simple, effective.

Table of Contents

Details

Highlights

  • Fast. Probably the fastest.
  • Synchronous, Promise and Stream API.
  • Object mode. Can return more than just strings.
  • Error-tolerant.

Donation

Donate

Old and modern mode

This package works in two modes, depending on the environment in which it is used.

  • Old mode. Node.js below 10.10 or when the stats option is enabled.
  • Modern mode. Node.js 10.10+ and the stats option is disabled.

The modern mode is faster. Learn more about the internal mechanism.

Pattern syntax

:warning: Always use forward-slashes in glob expressions (patterns and ignore option). Use backslashes for escaping characters.

There is more than one form of syntax: basic and advanced. Below is a brief overview of the supported features. Also pay attention to our FAQ.

:book: This package uses a micromatch as a library for pattern matching.

Basic syntax

  • An asterisk (*) — matches everything except slashes (path separators), hidden files (names starting with .).
  • A double star or globstar (**) — matches zero or more directories.
  • Question mark (?) – matches any single character except slashes (path separators).
  • Sequence ([seq]) — matches any character in sequence.

:book: A few additional words about the basic matching behavior.

Some examples:

  • src/**/*.js — matches all files in the src directory (any level of nesting) that have the .js extension.
  • src/*.?? — matches all files in the src directory (only first level of nesting) that have a two-character extension.
  • file-[01].js — matches files: file-0.js, file-1.js.

Advanced syntax

:book: A few additional words about the advanced matching behavior.

Some examples:

  • src/**/*.{css,scss} — matches all files in the src directory (any level of nesting) that have the .css or .scss extension.
  • file-[[:digit:]].js — matches files: file-0.js, file-1.js, …, file-9.js.
  • file-{1..3}.js — matches files: file-1.js, file-2.js, file-3.js.
  • file-(1|2).js — matches files: file-1.js, file-2.js.

Installation

npm install fast-glob

API

Asynchronous

Returns a Promise with an array of matching entries.

Synchronous

Returns an array of matching entries.

Stream

Returns a ReadableStream that emits a data event for each matching entry.

patterns

  • Required: true
  • Type: string | string[]

Any correct pattern(s).

:1234: Pattern syntax

:warning: This package does not respect the order of patterns. First, all the negative patterns are applied, and only then the positive patterns. If you want to get a certain order of records, use sorting or split calls.

options

See Options section.

Helpers

generateTasks(patterns, [options])

Returns the internal representation of patterns (a Task groups patterns by their base directory).

patterns
  • Required: true
  • Type: string | string[]

Any correct pattern(s).

options

See Options section.

isDynamicPattern(pattern, [options])

Returns true if the passed pattern is a dynamic pattern.

:1234: What is a static or dynamic pattern?

pattern
  • Required: true
  • Type: string

Any correct pattern.

options

See Options section.

escapePath(pattern)

Returns a path with escaped special characters (*?|(){}[], ! at the beginning of line, @+! before the opening parenthesis).

pattern
  • Required: true
  • Type: string

Any string, for example, a path to a file.

Options

Common options

concurrency

  • Type: number
  • Default: os.cpus().length

Specifies the maximum number of concurrent requests from a reader to read directories.

:book: The higher the number, the higher the performance and load on the file system. If you want to read in quiet mode, set the value to a comfortable number or 1.

cwd

  • Type: string
  • Default: process.cwd()

The current working directory in which to search.

deep

  • Type: number
  • Default: Infinity

Specifies the maximum depth of a read directory relative to the start directory.

For example, you have the following tree:

:book: If you specify a pattern with some base directory, this directory will not participate in the calculation of the depth of the found directories. Think of it as a cwd option.

followSymbolicLinks

  • Type: boolean
  • Default: true

Indicates whether to traverse descendants of symbolic link directories.

:book: If the stats option is specified, the information about the symbolic link (fs.lstat) will be replaced with information about the entry (fs.stat) behind it.

fs

  • Type: FileSystemAdapter
  • Default: fs.*

Custom implementation of methods for working with the file system.

ignore

  • Type: string[]
  • Default: []

An array of glob patterns to exclude matches. This is an alternative way to use negative patterns.

suppressErrors

  • Type: boolean
  • Default: false

By default this package suppresses only ENOENT errors. Set to true to suppress any error.

:book: Can be useful when the directory has entries with a special level of access.

throwErrorOnBrokenSymbolicLink

  • Type: boolean
  • Default: false

If true, throw an error when a symbolic link is broken; if false, safely return the result of the lstat call instead.

:book: This option has no effect on errors when reading the symbolic link directory.

Output control

absolute

  • Type: boolean
  • Default: false

Return the absolute path for entries.

:book: This option is required if you want to use negative patterns with absolute path, for example, !${__dirname}/*.js.

markDirectories

  • Type: boolean
  • Default: false

Mark the directory path with the final slash.

objectMode

  • Type: boolean
  • Default: false

Returns objects (instead of strings) describing entries.

The object has the following fields:

  • name (string) — the last part of the path (basename)
  • path (string) — full path relative to the pattern base directory
  • dirent (fs.Dirent) — instance of fs.Dirent

:book: An object is an internal representation of entry, so getting it does not affect performance.

onlyDirectories

  • Type: boolean
  • Default: false

Return only directories.

:book: If true, the onlyFiles option is automatically false.

onlyFiles

  • Type: boolean
  • Default: true

Return only files.

stats

  • Type: boolean
  • Default: false

Enables an object mode with an additional field:

  • stats (fs.Stats) — instance of fs.Stats

:book: Returns fs.stat instead of fs.lstat for symbolic links when the followSymbolicLinks option is specified.

:warning: Unlike object mode, this mode requires additional calls to the file system. On average, it is at least twice as slow. See old and modern mode for more details.

unique

  • Type: boolean
  • Default: true

Ensures that the returned entries are unique.

If true and similar entries are found, the result is the first found.

Matching control

braceExpansion

  • Type: boolean
  • Default: true

Enables Bash-like brace expansion.

:1234: Syntax description or more detailed description.

caseSensitiveMatch

  • Type: boolean
  • Default: true

Enables a case-sensitive mode for matching files.

dot

  • Type: boolean
  • Default: false

Allow patterns to match entries that begin with a period (.).

:book: Note that an explicit dot in a portion of the pattern will always match dot files.

extglob

  • Type: boolean
  • Default: true

Enables Bash-like extglob functionality.

:1234: Syntax description.

globstar

  • Type: boolean
  • Default: true

Enables recursive directory traversal for patterns containing **. If false, ** behaves exactly like *.

baseNameMatch

  • Type: boolean
  • Default: false

If set to true, then patterns without slashes will be matched against the basename of the path if it contains slashes.

FAQ

What is a static or dynamic pattern?

All patterns can be divided into two types:

  • static. A pattern is considered static if it can be used to get an entry on the file system without using matching mechanisms. For example, the file.js pattern is a static pattern because we can just verify that it exists on the file system.
  • dynamic. A pattern is considered dynamic if it cannot be used directly to find entries without a matching mechanism. For example, the * pattern is a dynamic pattern because we cannot use it directly.

A pattern is considered dynamic if it contains any of the following characters (where … stands for any characters, or none) or if any of these options apply:

  • The caseSensitiveMatch option is disabled
  • \\ (the escape character)
  • *, ?, ! (at the beginning of line)
  • […]
  • (…|…)
  • @(…), !(…), *(…), ?(…), +(…) (respects the extglob option)
  • {…,…}, {…..…} (respects the braceExpansion option)

How to write patterns on Windows?

Always use forward-slashes in glob expressions (patterns and ignore option). Use backslashes for escaping characters. With the cwd option use a convenient format.

Bad

Good

:book: Use the normalize-path or the unixify package to convert Windows-style path to a Unix-style path.

Read more about matching with backslashes.

Why do parentheses match incorrectly?

Refers to Bash. You need to escape special characters:

Read more about matching special characters as literals.

How to exclude directory from reading?

You can use a negative pattern like this: !**/node_modules or !**/node_modules/**. Also you can use ignore option. Just look at the example below.

If you don’t want to read the second directory, you must write the following pattern: !**/second or !**/second/**.

:warning: When you write !**/second/**/* it means that the directory will be read, but all the entries will not be included in the results.

You have to understand that if you write the pattern to exclude directories, then the directory will not be read under any circumstances.
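
To illustrate the effect (a naive model for illustration only, not fast-glob's actual matcher), excluding a directory means entries under it never appear at all:

```javascript
// Illustrative only: a naive stand-in for a negative pattern like
// !**/node_modules/** — any path passing through node_modules is dropped.
const entries = [
  'src/index.js',
  'node_modules/pkg/index.js',
  'src/node_modules/pkg/util.js'
];

const isExcluded = (entryPath) => entryPath.split('/').includes('node_modules');

console.log(entries.filter((entryPath) => !isExcluded(entryPath)));
// → [ 'src/index.js' ]
```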

How to use UNC path?

You cannot use Uniform Naming Convention (UNC) paths as patterns (due to syntax), but you can use them as cwd directory.

Compatible with node-glob?

node-glob    fast-glob
cwd          cwd
root         —
dot          dot
nomount      —
mark         markDirectories
nosort       —
nounique     unique
nobrace      braceExpansion
noglobstar   globstar
noext        extglob
nocase       caseSensitiveMatch
matchBase    baseNameMatch
nodir        onlyFiles
ignore       ignore
follow       followSymbolicLinks
realpath     —
absolute     absolute

Benchmarks

Server

Link: Vultr Bare Metal

You can see the results for the latest release here.

Nettop

Link: Zotac bi323

You can see the results for the latest release here.

Changelog

See the Releases section of our GitHub project for changelog for each release version.



Source Map

Build Status

NPM

This is a library to generate and consume the source map format described here.

Use with Node

npm install source-map

Use on the Web


Table of Contents

Examples

Consuming a source map

Generating a source map

In depth guide: Compiling to JavaScript, and Debugging with Source Maps

With SourceNode (high level API)

With SourceMapGenerator (low level API)

API

Get a reference to the module:

SourceMapConsumer

A SourceMapConsumer instance represents a parsed source map which we can query for information about the original file positions by giving it a file position in the generated source.

new SourceMapConsumer(rawSourceMap)

The only parameter is the raw source map (either as a string which can be JSON.parse’d, or an object). According to the spec, source maps have the following attributes:

  • version: Which version of the source map spec this map is following.

  • sources: An array of URLs to the original source files.

  • names: An array of identifiers which can be referenced by individual mappings.

  • sourceRoot: Optional. The URL root from which all sources are relative.

  • sourcesContent: Optional. An array of contents of the original source files.

  • mappings: A string of base64 VLQs which contain the actual mappings.

  • file: Optional. The generated filename this source map is associated with.
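
For a sense of how the mappings field is encoded, here is a minimal Base64 VLQ decoder sketch (the real parser also splits mappings on ; and , and tracks running line/column state):

```javascript
// Decode one Base64 VLQ value: each digit carries 5 value bits, bit 5 is
// the continuation flag, and bit 0 of the decoded number is the sign bit.
const BASE64 = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/';

function decodeVlq(str) {
  let result = 0;
  let shift = 0;
  for (const ch of str) {
    const digit = BASE64.indexOf(ch);
    result += (digit & 31) << shift;   // low 5 bits carry the value
    if ((digit & 32) === 0) {          // continuation bit clear: last digit
      const value = result >> 1;       // bit 0 is the sign bit
      return (result & 1) ? -value : value;
    }
    shift += 5;
  }
  throw new Error('truncated VLQ');
}

console.log(decodeVlq('A'));  // 0
console.log(decodeVlq('C'));  // 1
console.log(decodeVlq('D'));  // -1
console.log(decodeVlq('gB')); // 16
```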

SourceMapConsumer.prototype.computeColumnSpans()

Compute the last column for each generated mapping. The last column is inclusive.

SourceMapConsumer.prototype.originalPositionFor(generatedPosition)

Returns the original source, line, and column information for the generated source’s line and column positions provided. The only argument is an object with the following properties:

  • line: The line number in the generated source.

  • column: The column number in the generated source.

  • bias: Either SourceMapConsumer.GREATEST_LOWER_BOUND or SourceMapConsumer.LEAST_UPPER_BOUND. Specifies whether to return the closest element that is smaller than or greater than the one we are searching for, respectively, if the exact element cannot be found. Defaults to SourceMapConsumer.GREATEST_LOWER_BOUND.

and an object is returned with the following properties:

  • source: The original source file, or null if this information is not available.

  • line: The line number in the original source, or null if this information is not available.

  • column: The column number in the original source, or null if this information is not available.

  • name: The original identifier, or null if this information is not available.
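
The bias behaviour can be sketched with a plain search over sorted candidate positions (closest is a hypothetical helper for illustration, not part of the library):

```javascript
// Sketch of bias: pick the closest element <= target (GREATEST_LOWER_BOUND)
// or >= target (LEAST_UPPER_BOUND) when the exact position has no mapping.
function closest(sortedColumns, target, leastUpperBound) {
  let lower = null;
  let upper = null;
  for (const column of sortedColumns) {
    if (column <= target && (lower === null || column > lower)) lower = column;
    if (column >= target && (upper === null || column < upper)) upper = column;
  }
  return leastUpperBound ? upper : lower;
}

console.log(closest([10, 20, 40], 25, false)); // 20 (greatest lower bound)
console.log(closest([10, 20, 40], 25, true));  // 40 (least upper bound)
```

On an exact hit, both biases return the element itself.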

SourceMapConsumer.prototype.generatedPositionFor(originalPosition)

Returns the generated line and column information for the original source, line, and column positions provided. The only argument is an object with the following properties:

  • source: The filename of the original source.

  • line: The line number in the original source.

  • column: The column number in the original source.

and an object is returned with the following properties:

  • line: The line number in the generated source, or null.

  • column: The column number in the generated source, or null.

SourceMapConsumer.prototype.allGeneratedPositionsFor(originalPosition)

Returns all generated line and column information for the original source, line, and column provided. If no column is provided, returns all mappings corresponding to either the line we are searching for or the next closest line that has any mappings. Otherwise, returns all mappings corresponding to the given line and either the column we are searching for or the next closest column that has any offsets.

The only argument is an object with the following properties:

  • source: The filename of the original source.

  • line: The line number in the original source.

  • column: Optional. The column number in the original source.

and an array of objects is returned, each with the following properties:

  • line: The line number in the generated source, or null.

  • column: The column number in the generated source, or null.

SourceMapConsumer.prototype.hasContentsOfAllSources()

Return true if we have the embedded source content for every source listed in the source map, false otherwise.

In other words, if this method returns true, then consumer.sourceContentFor(s) will succeed for every source s in consumer.sources.

SourceMapConsumer.prototype.sourceContentFor(source[, returnNullOnMissing])

Returns the original source content for the source provided. The only argument is the URL of the original source file.

If the source content for the given source is not found, then an error is thrown. Optionally, pass true as the second param to have null returned instead.

SourceMapConsumer.prototype.eachMapping(callback, context, order)

Iterate over each mapping between an original source/line/column and a generated line/column in this source map.

  • callback: The function that is called with each mapping. Mappings have the form { source, generatedLine, generatedColumn, originalLine, originalColumn, name }

  • context: Optional. If specified, this object will be the value of this every time that callback is called.

  • order: Either SourceMapConsumer.GENERATED_ORDER or SourceMapConsumer.ORIGINAL_ORDER. Specifies whether you want to iterate over the mappings sorted by the generated file’s line/column order or the original’s source/line/column order, respectively. Defaults to SourceMapConsumer.GENERATED_ORDER.

SourceMapGenerator

An instance of the SourceMapGenerator represents a source map which is being built incrementally.

new SourceMapGenerator([startOfSourceMap])

You may pass an object with the following properties:

  • file: The filename of the generated source that this source map is associated with.

  • sourceRoot: A root for all relative URLs in this source map.

  • skipValidation: Optional. When true, disables validation of mappings as they are added. This can improve performance but should be used with discretion, as a last resort. Even then, one should avoid using this flag when running tests, if possible.

SourceMapGenerator.fromSourceMap(sourceMapConsumer)

Creates a new SourceMapGenerator from an existing SourceMapConsumer instance.

  • sourceMapConsumer The SourceMap.

SourceMapGenerator.prototype.addMapping(mapping)

Add a single mapping from original source line and column to the generated source’s line and column for this source map being created. The mapping object should have the following properties:

  • generated: An object with the generated line and column positions.

  • original: An object with the original line and column positions.

  • source: The original source file (relative to the sourceRoot).

  • name: An optional original token name for this mapping.
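
The mapping object looks like this (the values are made up for illustration; generator.addMapping itself is not invoked here):

```javascript
// Example mapping object for addMapping (illustrative values only):
const mapping = {
  generated: { line: 10, column: 35 }, // position in the generated file
  original: { line: 2, column: 10 },   // position in the original source
  source: 'foo.js',                    // original file, relative to sourceRoot
  name: 'myVariable'                   // optional original token name
};
```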

SourceMapGenerator.prototype.setSourceContent(sourceFile, sourceContent)

Set the source content for an original source file.

  • sourceFile the URL of the original source file.

  • sourceContent the content of the source file.

SourceMapGenerator.prototype.applySourceMap(sourceMapConsumer[, sourceFile[, sourceMapPath]])

Applies a SourceMap for a source file to the SourceMap. Each mapping to the supplied source file is rewritten using the supplied SourceMap. Note: The resolution for the resulting mappings is the minimum of this map and the supplied map.

  • sourceMapConsumer: The SourceMap to be applied.

  • sourceFile: Optional. The filename of the source file. If omitted, sourceMapConsumer.file will be used, if it exists. Otherwise an error will be thrown.

  • sourceMapPath: Optional. The dirname of the path to the SourceMap to be applied. If relative, it is relative to the SourceMap.

    This parameter is needed when the two SourceMaps aren’t in the same directory, and the SourceMap to be applied contains relative source paths. If so, those relative source paths need to be rewritten relative to the SourceMap.

    If omitted, it is assumed that both SourceMaps are in the same directory, thus not needing any rewriting. (Supplying '.' has the same effect.)

SourceMapGenerator.prototype.toString()

Renders the source map being generated to a string.

SourceNode

SourceNodes provide a way to abstract over interpolating and/or concatenating snippets of generated JavaScript source code, while maintaining the line and column information associated between those snippets and the original source code. This is useful as the final intermediate representation a compiler might use before outputting the generated JS and source map.

new SourceNode([line, column, source[, chunk[, name]]])

  • line: The original line number associated with this source node, or null if it isn’t associated with an original line.

  • column: The original column number associated with this source node, or null if it isn’t associated with an original column.

  • source: The original source’s filename; null if no filename is provided.

  • chunk: Optional. Is immediately passed to SourceNode.prototype.add, see below.

  • name: Optional. The original identifier.

SourceNode.fromStringWithSourceMap(code, sourceMapConsumer[, relativePath])

Creates a SourceNode from generated code and a SourceMapConsumer.

  • code: The generated code

  • sourceMapConsumer The SourceMap for the generated code

  • relativePath The optional path that relative sources in sourceMapConsumer should be relative to.

SourceNode.prototype.add(chunk)

Add a chunk of generated JS to this source node.

  • chunk: A string snippet of generated JS code, another instance of SourceNode, or an array where each member is one of those things.

SourceNode.prototype.prepend(chunk)

Prepend a chunk of generated JS to this source node.

  • chunk: A string snippet of generated JS code, another instance of SourceNode, or an array where each member is one of those things.

SourceNode.prototype.setSourceContent(sourceFile, sourceContent)

Set the source content for a source file. This will be added to the SourceMap in the sourcesContent field.

  • sourceFile: The filename of the source file

  • sourceContent: The content of the source file

SourceNode.prototype.walk(fn)

Walk over the tree of JS snippets in this node and its children. The walking function is called once for each snippet of JS and is passed that snippet and its original associated source’s line/column location.

  • fn: The traversal function.

SourceNode.prototype.walkSourceContents(fn)

Walk over the tree of SourceNodes. The walking function is called for each source file content and is passed the filename and source content.

  • fn: The traversal function.

SourceNode.prototype.join(sep)

Like Array.prototype.join except for SourceNodes. Inserts the separator between each of this source node’s children.

  • sep: The separator.

SourceNode.prototype.replaceRight(pattern, replacement)

Call String.prototype.replace on the very right-most source snippet. Useful for trimming white space from the end of a source node, etc.

  • pattern: The pattern to replace.

  • replacement: The thing to replace the pattern with.

SourceNode.prototype.toString()

Return the string representation of this source node. Walks over the tree and concatenates all the various snippets together to one string.
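
The concatenation can be modeled with a toy tree (toyNode is a stand-in for illustration, not the real SourceNode, which also tracks positions and sources):

```javascript
// Toy model of SourceNode.prototype.toString: children are strings or
// nested nodes, and stringifying walks the tree depth-first.
function toyNode(...children) {
  return {
    children,
    toString() {
      return this.children.map(String).join('');
    }
  };
}

const inner = toyNode('b', 'c');
const root = toyNode('a', inner, 'd');
console.log(String(root)); // abcd
```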

SourceNode.prototype.toStringWithSourceMap([startOfSourceMap])

Returns the string representation of this tree of source nodes, plus a SourceMapGenerator which contains all the mappings between the generated and original sources.

The arguments are the same as those to new SourceMapGenerator.





braces NPM version NPM monthly downloads NPM total downloads Linux Build Status Windows Build Status

Bash-like brace expansion, implemented in JavaScript. Safer than other brace expansion libs, with complete support for the Bash 4.3 braces specification, without sacrificing speed.

Please consider following this project’s author, Jon Schlinkert, and consider starring the project to show your :heart: and support.

Install

Install with npm:

Why use braces?

Brace patterns are great for matching ranges. Users (and implementors) shouldn’t have to think about whether or not they will break their application (or yours) from accidentally defining an aggressive brace pattern. Braces is the only library that offers a solution to this problem.

Usage

The main export is a function that takes one or more brace patterns and options.

By default, braces returns an optimized regex-source string. To get an array of brace patterns, use braces.expand().

The following section explains the difference in more detail. (If you’re curious about “why” braces does this by default, see brace matching pitfalls.)

Optimized vs. expanded braces

Optimized

By default, patterns are optimized for regex and matching:
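
A rough sketch of the idea (illustration only; the real library produces much more careful regex source and handles nesting, ranges, and escaping):

```javascript
// Sketch of "optimized" output: rewrite a simple, non-nested list as a
// regex alternation instead of expanding it into an array.
function optimizeSimple(pattern) {
  return pattern.replace(/\{([^{}]+)\}/g, (match, alternatives) => {
    return '(' + alternatives.split(',').join('|') + ')';
  });
}

console.log(optimizeSimple('foo/{a,b,c}/bar')); // foo/(a|b|c)/bar
```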

Expanded

To expand patterns the same way as Bash or minimatch, use the .expand method:

Or use options.expand:
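
The expanded behaviour for simple, non-nested lists can be sketched like this (the real braces.expand also handles ranges, nesting, and escaping):

```javascript
// Sketch of "expanded" output: recursively substitute each alternative of
// the first {...} list until no braces remain.
function expandSimple(pattern) {
  const match = pattern.match(/\{([^{}]+)\}/);
  if (!match) return [pattern];
  return match[1].split(',').flatMap((alternative) =>
    expandSimple(
      pattern.slice(0, match.index) +
        alternative +
        pattern.slice(match.index + match[0].length)
    )
  );
}

console.log(expandSimple('foo/{a,b,c}/bar'));
// → [ 'foo/a/bar', 'foo/b/bar', 'foo/c/bar' ]
```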

Features

Lists

Uses fill-range for expanding alphabetical or numeric lists:

Sequences

Uses fill-range for expanding alphabetical or numeric ranges (bash “sequences”):

Steps

Steps, or increments, may be used with ranges:

When the .optimize method is used, or options.optimize is set to true, sequences are passed to to-regex-range for expansion.

Nesting

Brace patterns may be nested. The results of each expanded string are not sorted, and left to right order is preserved.

“Expanded” braces

“Optimized” braces

Escaping

Escaping braces

A brace pattern will not be expanded or evaluated if either the opening or closing brace is escaped:

Escaping commas

Commas inside braces may also be escaped:

Single items

Following bash conventions, a brace pattern is also not expanded when it contains a single character:

Options

options.maxLength

Type: Number

Default: 65,536

Description: Limit the length of the input string. Useful when the input string is generated or your application allows users to pass a string, et cetera.

options.expand

Type: Boolean

Default: undefined

Description: Generate an “expanded” brace pattern (this option is unnecessary with the .expand method, which does the same thing).

options.optimize

Type: Boolean

Default: true

Description: Enabled by default. Produces an optimized regex-source string rather than an expanded array of patterns.

options.nodupes

Type: Boolean

Default: true

Description: Duplicates are removed by default. To keep duplicates, pass { nodupes: false } in the options.

options.rangeLimit

Type: Number

Default: 250

Description: When braces.expand() is used, or options.expand is true, brace patterns will automatically be optimized when the difference between the range minimum and range maximum exceeds the rangeLimit. This is to prevent huge ranges from freezing your application.

You can set this to any number, or set options.rangeLimit to Infinity to disable this altogether.

Examples

options.transform

Type: Function

Default: undefined

Description: Customize range expansion.

options.quantifiers

Type: Boolean

Default: undefined

Description: In regular expressions, quantifiers specify how many times a token can be repeated. For example, a{1,3} will match the letter a one to three times.

Unfortunately, regex quantifiers happen to share the same syntax as Bash lists.

The quantifiers option tells braces to detect when regex quantifiers are defined in the given pattern, and not to try to expand them as lists.
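
The ambiguity is easy to see with plain regular expressions (the brace-list reading shown in the comment is what expansion would produce instead):

```javascript
// As a regex quantifier, a{1,3} matches "a" repeated one to three times:
console.log(/^a{1,3}$/.test('aa'));   // true
console.log(/^a{1,3}$/.test('aaaa')); // false
// As a Bash-style list, the same text would instead expand to ['a1', 'a3'].
```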

Examples

options.unescape

Type: Boolean

Default: undefined

Description: Strip backslashes that were used for escaping from the result.

What is “brace expansion”?

Brace expansion is a type of parameter expansion that was made popular by unix shells for generating lists of strings, as well as regex-like matching when used alongside wildcards (globs).

In addition to “expansion”, braces are also used for matching. In other words:

More about brace expansion (click to expand)

There are two main types of brace expansion:

  1. lists: which are defined using comma-separated values inside curly braces: {a,b,c}
  2. sequences: which are defined using a starting value and an ending value, separated by two dots: a{1..3}b. Optionally, a third argument may be passed to define a “step” or increment to use: a{1..100..10}b. These are also sometimes referred to as “ranges”.

Here are some example brace patterns to illustrate how they work:

Sets

{a,b,c}       => a b c
{a,b,c}{1,2}  => a1 a2 b1 b2 c1 c2

Sequences

{1..9}        => 1 2 3 4 5 6 7 8 9
{4..-4}       => 4 3 2 1 0 -1 -2 -3 -4
{1..20..3}    => 1 4 7 10 13 16 19
{a..j}        => a b c d e f g h i j
{j..a}        => j i h g f e d c b a
{a..z..3}     => a d g j m p s v y

Combination

Sets and sequences can be mixed together or used along with any other strings.

{a,b,c}{1..3}   => a1 a2 a3 b1 b2 b3 c1 c2 c3
foo/{a,b,c}/bar => foo/a/bar foo/b/bar foo/c/bar

The fact that braces can be “expanded” from relatively simple patterns makes them ideal for quickly generating test fixtures, file paths, and similar use cases.

Brace matching

In addition to expansion, brace patterns are also useful for performing regular-expression-like matching.

For example, the pattern foo/{1..3}/bar would match any of the following strings:

foo/1/bar
foo/2/bar
foo/3/bar

But not:

baz/1/qux
baz/2/qux
baz/3/qux

Braces can also be combined with glob patterns to perform more advanced wildcard matching. For example, the pattern */{1..3}/* would match any of the following strings:

foo/1/bar
foo/2/bar
foo/3/bar
baz/1/qux
baz/2/qux
baz/3/qux

Brace matching pitfalls

Although brace patterns offer a user-friendly way of matching ranges or sets of strings, there are also some major disadvantages and potential risks you should be aware of.

tldr

“brace bombs”

  • brace expansion can eat up a huge amount of processing resources
  • as brace patterns increase linearly in size, the system resources required to expand the pattern increase exponentially
  • users can accidentally (or intentionally) exhaust your system’s resources resulting in the equivalent of a DoS attack (bonus: no programming knowledge is required!)

For a more detailed explanation with examples, see the geometric complexity section.

The solution

Jump to the performance section to see how Braces solves this problem in comparison to other libraries.

Geometric complexity

At minimum, brace patterns with sets limited to two elements have quadratic, or O(n^2), complexity. The complexity increases exponentially as the number of sets, and elements per set, increases, approaching O(n^c).

For example, the following sets demonstrate quadratic (O(n^2)) complexity:

{1,2}{3,4}      => (2X2)    => 13 14 23 24
{1,2}{3,4}{5,6} => (2X2X2)  => 135 136 145 146 235 236 245 246

But add an element to a set, and we get an n-fold Cartesian product with O(n^c) complexity:

{1,2,3}{4,5,6}{7,8,9} => (3X3X3) => 147 148 149 157 158 159 167 168 169 247 248 
                                    249 257 258 259 267 268 269 347 348 349 357 
                                    358 359 367 368 369

Now, imagine how this complexity grows given that each element is a n-tuple:

{1..100}{1..100}         => (100X100)     => 10,000 elements (38.4 kB)
{1..100}{1..100}{1..100} => (100X100X100) => 1,000,000 elements (5.76 MB)

Although these examples are clearly contrived, they demonstrate how brace patterns can quickly grow out of control.
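
The counts in the examples above are just products of the set sizes:

```javascript
// Number of expanded results = product of the sizes of each set/range.
const expansionCount = (...setSizes) => setSizes.reduce((a, b) => a * b, 1);

console.log(expansionCount(2, 2));          // 4
console.log(expansionCount(3, 3, 3));       // 27
console.log(expansionCount(100, 100, 100)); // 1000000
```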

More information

Interested in learning more about brace expansion?

Performance

Braces is not only screaming fast, it’s also more accurate than other brace expansion libraries.

Better algorithms

Fortunately there is a solution to the “brace bomb” problem: don’t expand brace patterns into an array when they’re used for matching.

Instead, convert the pattern into an optimized regular expression. This is easier said than done, and braces is the only library that does this currently.
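A minimal sketch of the idea, assuming a flat comma-separated set with no nesting or ranges (braceSetToRegex is hypothetical, not braces’ real implementation):

```javascript
// Compile a simple {a,b,c} brace set into an alternation regex instead of
// expanding it into an array of strings. The regex stays small no matter
// how many alternatives the set contains.
function braceSetToRegex(pattern) {
  const source = pattern.replace(/\{([^}]+)\}/g, (_, body) =>
    '(?:' + body.split(',').join('|') + ')'
  );
  return new RegExp('^' + source + '$');
}

const re = braceSetToRegex('foo/{bar,baz}/qux');
console.log(re.test('foo/bar/qux'));  // true
console.log(re.test('foo/quux/qux')); // false
```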

The proof is in the numbers

Minimatch gets exponentially slower as patterns increase in complexity; braces does not. The following results were generated using braces() and minimatch.braceExpand(), respectively.

| Pattern | braces | minimatch |
| --- | --- | --- |
| {1..9007199254740991} ¹ | 298 B (5ms 459μs) | N/A (freezes) |
| {1..1000000000000000} | 41 B (1ms 15μs) | N/A (freezes) |
| {1..100000000000000} | 40 B (890μs) | N/A (freezes) |
| {1..10000000000000} | 39 B (2ms 49μs) | N/A (freezes) |
| {1..1000000000000} | 38 B (608μs) | N/A (freezes) |
| {1..100000000000} | 37 B (397μs) | N/A (freezes) |
| {1..10000000000} | 35 B (983μs) | N/A (freezes) |
| {1..1000000000} | 34 B (798μs) | N/A (freezes) |
| {1..100000000} | 33 B (733μs) | N/A (freezes) |
| {1..10000000} | 32 B (5ms 632μs) | 78.89 MB (16s 388ms 569μs) |
| {1..1000000} | 31 B (1ms 381μs) | 6.89 MB (1s 496ms 887μs) |
| {1..100000} | 30 B (950μs) | 588.89 kB (146ms 921μs) |
| {1..10000} | 29 B (1ms 114μs) | 48.89 kB (14ms 187μs) |
| {1..1000} | 28 B (760μs) | 3.89 kB (1ms 453μs) |
| {1..100} | 22 B (345μs) | 291 B (196μs) |
| {1..10} | 10 B (533μs) | 20 B (37μs) |
| {1..3} | 7 B (190μs) | 5 B (27μs) |

Faster algorithms

When you need expansion, braces is still much faster.

(the following results were generated using braces.expand() and minimatch.braceExpand(), respectively)

| Pattern | braces | minimatch |
| --- | --- | --- |
| {1..10000000} | 78.89 MB (2s 698ms 642μs) | 78.89 MB (18s 601ms 974μs) |
| {1..1000000} | 6.89 MB (458ms 576μs) | 6.89 MB (1s 491ms 621μs) |
| {1..100000} | 588.89 kB (20ms 728μs) | 588.89 kB (156ms 919μs) |
| {1..10000} | 48.89 kB (2ms 202μs) | 48.89 kB (13ms 641μs) |
| {1..1000} | 3.89 kB (1ms 796μs) | 3.89 kB (1ms 958μs) |
| {1..100} | 291 B (424μs) | 291 B (211μs) |
| {1..10} | 20 B (487μs) | 20 B (72μs) |
| {1..3} | 5 B (166μs) | 5 B (27μs) |

If you’d like to run these comparisons yourself, see test/support/generate.js.

Benchmarks

Running benchmarks

Install dev dependencies:

Latest results

Benchmarking: (8 of 8)
 · combination-nested
 · combination
 · escaped
 · list-basic
 · list-multiple
 · no-braces
 · sequence-basic
 · sequence-multiple

# benchmark/fixtures/combination-nested.js (52 bytes)
  brace-expansion x 4,756 ops/sec ±1.09% (86 runs sampled)
  braces x 11,202,303 ops/sec ±1.06% (88 runs sampled)
  minimatch x 4,816 ops/sec ±0.99% (87 runs sampled)

  fastest is braces

# benchmark/fixtures/combination.js (51 bytes)
  brace-expansion x 625 ops/sec ±0.87% (87 runs sampled)
  braces x 11,031,884 ops/sec ±0.72% (90 runs sampled)
  minimatch x 637 ops/sec ±0.84% (88 runs sampled)

  fastest is braces

# benchmark/fixtures/escaped.js (44 bytes)
  brace-expansion x 163,325 ops/sec ±1.05% (87 runs sampled)
  braces x 10,655,071 ops/sec ±1.22% (88 runs sampled)
  minimatch x 147,495 ops/sec ±0.96% (88 runs sampled)

  fastest is braces

# benchmark/fixtures/list-basic.js (40 bytes)
  brace-expansion x 99,726 ops/sec ±1.07% (83 runs sampled)
  braces x 10,596,584 ops/sec ±0.98% (88 runs sampled)
  minimatch x 100,069 ops/sec ±1.17% (86 runs sampled)

  fastest is braces

# benchmark/fixtures/list-multiple.js (52 bytes)
  brace-expansion x 34,348 ops/sec ±1.08% (88 runs sampled)
  braces x 9,264,131 ops/sec ±1.12% (88 runs sampled)
  minimatch x 34,893 ops/sec ±0.87% (87 runs sampled)

  fastest is braces

# benchmark/fixtures/no-braces.js (48 bytes)
  brace-expansion x 275,368 ops/sec ±1.18% (89 runs sampled)
  braces x 9,134,677 ops/sec ±0.95% (88 runs sampled)
  minimatch x 3,755,954 ops/sec ±1.13% (89 runs sampled)

  fastest is braces

# benchmark/fixtures/sequence-basic.js (41 bytes)
  brace-expansion x 5,492 ops/sec ±1.35% (87 runs sampled)
  braces x 8,485,034 ops/sec ±1.28% (89 runs sampled)
  minimatch x 5,341 ops/sec ±1.17% (87 runs sampled)

  fastest is braces

# benchmark/fixtures/sequence-multiple.js (51 bytes)
  brace-expansion x 116 ops/sec ±0.77% (77 runs sampled)
  braces x 9,445,118 ops/sec ±1.32% (84 runs sampled)
  minimatch x 109 ops/sec ±1.16% (76 runs sampled)

  fastest is braces

About

Contributing

Pull requests and stars are always welcome. For bugs and feature requests, please create an issue.

Running Tests

Running and reviewing unit tests is a great way to get familiar with a library and its API. You can install dependencies and run tests with the following command:

Building docs

(This project’s readme.md is generated by verb, please don’t edit the readme directly. Any changes to the readme must be made in the .verb.md readme template.)

To generate the readme, run the following command:

You might also be interested in these projects:

  • expand-brackets: Expand POSIX bracket expressions (character classes) in glob patterns. | homepage
  • extglob: Extended glob support for JavaScript. Adds (almost) the expressive power of regular expressions to glob… more | homepage
  • fill-range: Fill in a range of numbers or letters, optionally passing an increment or step to… more | homepage
  • micromatch: Glob matching for javascript/node.js. A drop-in replacement and faster alternative to minimatch and multimatch. | homepage
  • nanomatch: Fast, minimal glob matcher for node.js. Similar to micromatch, minimatch and multimatch, but complete Bash… more | homepage

| Commits | Contributor |
| --- | --- |
| 188 | jonschlinkert |
| 4 | doowb |
| 1 | es128 |
| 1 | eush77 |
| 1 | hemanth |

Author

Jon Schlinkert


This file was generated by verb-generate-readme, v0.6.0, on February 17, 2018.


  1. this is the largest safe integer allowed in JavaScript.



date-and-time

Circle CI

This library is a minimalist collection of functions for manipulating JS date and time. It’s tiny, simple, and easy to learn.

Why

JS modules nowadays are getting ever larger and more complex, and often carry many dependencies. Keeping each module simple and small is worthwhile.

Features

  • Minimalist. Approximately 2k (minified and gzipped).
  • Extensible. Plugin system support.
  • Multi language support.
  • Universal / Isomorphic. Works wherever.
  • Older browser support. Even works on IE6. :)

Install

  • via npm:
npm install date-and-time --save
  • local:

Recent Changes

Usage

  • Node.js:
  • With a transpiler:
  • The browser:

API

format(dateObj, formatString[, utc])

  • Formatting a date.
    • @param {Date} dateObj - a Date object
    • @param {string|Array.<string>} arg - a format string or a compiled object
    • @param {boolean} [utc] - output as UTC
    • @returns {string} a formatted string

Available tokens and their meanings are as follows:

| token | meaning | examples of output |
| --- | --- | --- |
| YYYY | four-digit year | 0999, 2015 |
| YY | two-digit year | 99, 01, 15 |
| Y | four-digit year without zero-padding | 2, 44, 888, 2015 |
| MMMM | month name (long) | January, December |
| MMM | month name (short) | Jan, Dec |
| MM | month with zero-padding | 01, 12 |
| M | month | 1, 12 |
| DD | date with zero-padding | 02, 31 |
| D | date | 2, 31 |
| dddd | day of week (long) | Friday, Sunday |
| ddd | day of week (short) | Fri, Sun |
| dd | day of week (very short) | Fr, Su |
| HH | 24-hour with zero-padding | 23, 08 |
| H | 24-hour | 23, 8 |
| hh | 12-hour with zero-padding | 11, 08 |
| h | 12-hour | 11, 8 |
| A | meridiem (uppercase) | AM, PM |
| mm | minute with zero-padding | 14, 07 |
| m | minute | 14, 7 |
| ss | second with zero-padding | 05, 10 |
| s | second | 5, 10 |
| SSS | millisecond (high accuracy) | 753, 022 |
| SS | millisecond (middle accuracy) | 75, 02 |
| S | millisecond (low accuracy) | 7, 0 |
| Z | timezone offset | +0100, -0800 |
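
To illustrate how such tokens are substituted, here is a minimal stand-alone formatter sketch (formatDate is hypothetical, not the library’s implementation; the real format() supports all of the tokens above):

```javascript
// Token-based formatting: scan the format string for known tokens and
// replace each with the corresponding, zero-padded date field.
function formatDate(d, fmt) {
  const pad = (n, w = 2) => String(n).padStart(w, '0');
  const tokens = {
    YYYY: () => pad(d.getFullYear(), 4),
    MM: () => pad(d.getMonth() + 1),
    DD: () => pad(d.getDate()),
    HH: () => pad(d.getHours()),
    mm: () => pad(d.getMinutes()),
    ss: () => pad(d.getSeconds()),
  };
  return fmt.replace(/YYYY|MM|DD|HH|mm|ss/g, t => tokens[t]());
}

console.log(formatDate(new Date(2015, 0, 2, 23, 14, 5), 'YYYY/MM/DD HH:mm:ss'));
// => '2015/01/02 23:14:05'
```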

You can also use the following tokens by importing plugins. See PLUGINS.md for details.

| token | meaning | examples of output |
| --- | --- | --- |
| DDD | ordinal notation of date | 1st, 2nd, 3rd |
| AA | meridiem (uppercase with ellipsis) | A.M., P.M. |
| a | meridiem (lowercase) | am, pm |
| aa | meridiem (lowercase with ellipsis) | a.m., p.m. |

NOTE 1. Comments

Strings in square brackets [...] in the formatString will be ignored as comments:

NOTE 2. Output as UTC

This function usually outputs a local date-time string. Set the utc option (the 3rd parameter) to true if you would like a UTC date-time string.

NOTE 3. More Tokens

You can also define your own tokens. See EXTEND.md for details.

parse(dateString, arg[, utc])

  • Parsing a date string.
    • @param {string} dateString - a date string
    • @param {string|Array.<string>} arg - a format string or a compiled object
    • @param {boolean} [utc] - input as UTC
    • @returns {Date} a constructed date

Available tokens and their meanings are as follows:

| token | meaning | examples of acceptable form |
| --- | --- | --- |
| YYYY | four-digit year | 0999, 2015 |
| Y | four-digit year without zero-padding | 2, 44, 88, 2015 |
| MMMM | month name (long) | January, December |
| MMM | month name (short) | Jan, Dec |
| MM | month with zero-padding | 01, 12 |
| M | month | 1, 12 |
| DD | date with zero-padding | 02, 31 |
| D | date | 2, 31 |
| HH | 24-hour with zero-padding | 23, 08 |
| H | 24-hour | 23, 8 |
| hh | 12-hour with zero-padding | 11, 08 |
| h | 12-hour | 11, 8 |
| A | meridiem (uppercase) | AM, PM |
| mm | minute with zero-padding | 14, 07 |
| m | minute | 14, 7 |
| ss | second with zero-padding | 05, 10 |
| s | second | 5, 10 |
| SSS | millisecond (high accuracy) | 753, 022 |
| SS | millisecond (middle accuracy) | 75, 02 |
| S | millisecond (low accuracy) | 7, 0 |
| Z | timezone offset | +0100, -0800 |
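
Conceptually, parsing reverses formatting: the tokens are matched against the string and the captured fields build a Date. A tiny stand-alone sketch for a single 'YYYY/MM/DD' format (parseYMD is hypothetical; the real parse() handles all of the tokens above):

```javascript
// Match a fixed 'YYYY/MM/DD' layout and construct a Date from the
// captured fields; on failure, return Invalid Date as the library does.
function parseYMD(str) {
  const m = /^(\d{4})\/(\d{2})\/(\d{2})$/.exec(str);
  if (!m) return new Date(NaN); // Invalid Date
  return new Date(+m[1], +m[2] - 1, +m[3]);
}

console.log(parseYMD('2015/01/02').getFullYear()); // 2015
console.log(isNaN(parseYMD('oops').getTime()));    // true
```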

You can also use the following tokens by importing plugins. See PLUGINS.md for details.

| token | meaning | examples of acceptable form |
| --- | --- | --- |
| YY | two-digit year | 90, 00, 08, 19 |
| Y | two-digit year without zero-padding | 90, 0, 8, 19 |
| A | meridiem | AM, PM, A.M., P.M., am, pm, a.m., p.m. |
| dddd | day of week (long) | Friday, Sunday |
| ddd | day of week (short) | Fri, Sun |
| dd | day of week (very short) | Fr, Su |
| SSSSSS | microsecond (high accuracy) | 123456, 000001 |
| SSSSS | microsecond (middle accuracy) | 12345, 00001 |
| SSSS | microsecond (low accuracy) | 1234, 0001 |

NOTE 1. Invalid Date

If the function fails to parse, it will return Invalid Date. Notice that Invalid Date is a Date object, not NaN or null. You can tell whether a Date object is invalid as follows:
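
The original snippet is omitted here; the standard JS idiom for the check looks like this:

```javascript
// An invalid Date is still a Date object; its internal time value is NaN.
const maybeDate = new Date('not a date'); // stands in for a failed parse result
console.log(maybeDate instanceof Date);   // true
console.log(isNaN(maybeDate.getTime()));  // true => invalid
```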

NOTE 2. Input as UTC

This function usually assumes the dateString is a local date-time. Set the utc option (the 3rd parameter) to true if it is a UTC date-time.

NOTE 3. Default Date Time

The default date is January 1, 1970, and the default time is 00:00:00.000. Omitted values are filled in from these defaults:

NOTE 4. Max Date / Min Date

Parsable maximum date is December 31, 9999, minimum date is January 1, 0001.

NOTE 5. 12-hour notation and Meridiem

If you use the hh or h (12-hour) token, also use the A (meridiem) token to get the right value.

NOTE 6. Token disablement

Use square brackets [] if a date-time string includes token characters. Tokens inside square brackets in the formatString will be interpreted as literal characters:

NOTE 7. Wildcard

A white space works as a wildcard token. This token is not interpreted as anything, which means it can be used to skip over a variable part of the string. For example, when you would like to ignore the time part of a date string, you can write as follows:

NOTE 8. Ellipsis

The parser supports the ... (ellipsis) token. The above example can also be written like this:

compile(formatString)

  • Compiling a format string for the parser.
    • @param {string} formatString - a format string
    • @returns {Array.<string>} a compiled object

If you are going to call format(), parse(), or isValid() many times with the same format string, it is recommended to precompile it once and reuse the compiled object for performance.

preparse(dateString, arg)

  • Pre-parsing a date string.
    • @param {string} dateString - a date string
    • @param {string|Array.<string>} arg - a format string or a compiled object
    • @returns {Object} a date structure

This function takes exactly the same parameters as parse(), but returns a date structure as follows instead of a Date object:

This date structure represents the parsing result. From it you can tell how the date string was parsed (or why parsing failed).

isValid(arg1[, arg2])

  • Validation.
    • @param {Object|string} arg1 - a date structure or a date string
    • @param {string|Array.<string>} [arg2] - a format string or a compiled object
    • @returns {boolean} whether the date string is a valid date

This function takes either the same parameters as parse() or a date structure returned by preparse(), and evaluates their validity.

transform(dateString, arg1, arg2[, utc])

  • Transformation of date string.
    • @param {string} dateString - a date string
    • @param {string|Array.<string>} arg1 - the format string of the date string or the compiled object
    • @param {string|Array.<string>} arg2 - the transformed format string or the compiled object
    • @param {boolean} [utc] - output as UTC
    • @returns {string} a formatted string

This function transforms a date string from one format into another. The 2nd parameter, arg1, is the format of the input string; its available tokens are the same as parse()’s. The 3rd parameter, arg2, is the output format; its available tokens are the same as format()’s.

addYears(dateObj, years)

  • Adding years.
    • @param {Date} dateObj - a Date object
    • @param {number} years - number of years to add
    • @returns {Date} a date after adding the value

addMonths(dateObj, months)

  • Adding months.
    • @param {Date} dateObj - a Date object
    • @param {number} months - number of months to add
    • @returns {Date} a date after adding the value

addDays(dateObj, days)

  • Adding days.
    • @param {Date} dateObj - a Date object
    • @param {number} days - number of days to add
    • @returns {Date} a date after adding the value

addHours(dateObj, hours)

  • Adding hours.
    • @param {Date} dateObj - a Date object
    • @param {number} hours - number of hours to add
    • @returns {Date} a date after adding the value

addMinutes(dateObj, minutes)

  • Adding minutes.
    • @param {Date} dateObj - a Date object
    • @param {number} minutes - number of minutes to add
    • @returns {Date} a date after adding the value

addSeconds(dateObj, seconds)

  • Adding seconds.
    • @param {Date} dateObj - a Date object
    • @param {number} seconds - number of seconds to add
    • @returns {Date} a date after adding the value

addMilliseconds(dateObj, milliseconds)

  • Adding milliseconds.
    • @param {Date} dateObj - a Date object
    • @param {number} milliseconds - number of milliseconds to add
    • @returns {Date} a date after adding the value

subtract(date1, date2)

  • Subtracting.
    • @param {Date} date1 - a Date object
    • @param {Date} date2 - a Date object
    • @returns {Object} a result object subtracting date2 from date1

isLeapYear(y)

  • Leap year.
    • @param {number} y - year
    • @returns {boolean} whether the year is a leap year

isSameDay(date1, date2)

  • Comparison of two dates.
    • @param {Date} date1 - a Date object
    • @param {Date} date2 - a Date object
    • @returns {boolean} whether the dates are the same day (times are ignored)

locale([code[, locale]])

  • Changing the locale or setting a new locale definition.
    • @param {string} code - language code
    • @param {Object} [locale] - locale definition
    • @returns {string} current language code

It returns the current language code if called without any parameters.

To switch to any other language, call it with a language code.

See LOCALE.md for details.

extend(extension)

  • Locale extension.
    • @param {Object} extension - locale definition
    • @returns {void}

Extends the current locale. See EXTEND.md for details.

plugin(name[, extension])

  • Plugin import or definition.
    • @param {string} name - plugin name
    • @param {Object} extension - locale definition
    • @returns {void}

A plugin is a named locale definition created with extend(). See PLUGINS.md for details.

Browser Support

Chrome, Firefox, Safari, Edge, and Internet Explorer 6+.



Source Map

Build Status

NPM

This is a library to generate and consume the source map format described here.

Use with Node

npm install source-map

Use on the Web

<script src="https://raw.githubusercontent.com/mozilla/source-map/master/dist/source-map.min.js" defer></script>

Table of Contents

Examples

Consuming a source map

Generating a source map

In depth guide: Compiling to JavaScript, and Debugging with Source Maps

With SourceNode (high level API)

With SourceMapGenerator (low level API)

API

Get a reference to the module:

SourceMapConsumer

A SourceMapConsumer instance represents a parsed source map which we can query for information about the original file positions by giving it a file position in the generated source.

new SourceMapConsumer(rawSourceMap)

The only parameter is the raw source map (either as a string which can be JSON.parse’d, or an object). According to the spec, source maps have the following attributes:

  • version: Which version of the source map spec this map is following.

  • sources: An array of URLs to the original source files.

  • names: An array of identifiers which can be referenced by individual mappings.

  • sourceRoot: Optional. The URL root from which all sources are relative.

  • sourcesContent: Optional. An array of contents of the original source files.

  • mappings: A string of base64 VLQs which contain the actual mappings.

  • file: Optional. The generated filename this source map is associated with.
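
For intuition about the mappings field: each base64 character encodes a 5-bit VLQ digit with a continuation bit, and each completed value carries its sign in its low bit. A minimal decoder sketch (illustrative only; the library handles this internally):

```javascript
// Decode a base64 VLQ segment into its list of signed integers.
const B64 = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/';

function decodeVLQ(str) {
  const out = [];
  let value = 0, shift = 0;
  for (const ch of str) {
    const digit = B64.indexOf(ch);
    value += (digit & 31) << shift;   // low 5 bits contribute to the value
    if (digit & 32) {                 // continuation bit: more digits follow
      shift += 5;
    } else {                          // final digit: low bit of value is the sign
      out.push(value & 1 ? -(value >>> 1) : value >>> 1);
      value = 0;
      shift = 0;
    }
  }
  return out;
}

console.log(decodeVLQ('AAgBC')); // => [0, 0, 16, 1]
```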

SourceMapConsumer.prototype.computeColumnSpans()

Compute the last column for each generated mapping. The last column is inclusive.

SourceMapConsumer.prototype.originalPositionFor(generatedPosition)

Returns the original source, line, and column information for the generated source’s line and column positions provided. The only argument is an object with the following properties:

  • line: The line number in the generated source. Line numbers in this library are 1-based (note that the underlying source map specification uses 0-based line numbers – this library handles the translation).

  • column: The column number in the generated source. Column numbers in this library are 0-based.

  • bias: Either SourceMapConsumer.GREATEST_LOWER_BOUND or SourceMapConsumer.LEAST_UPPER_BOUND. Specifies whether to return the closest element that is smaller than or greater than the one we are searching for, respectively, if the exact element cannot be found. Defaults to SourceMapConsumer.GREATEST_LOWER_BOUND.

and an object is returned with the following properties:

  • source: The original source file, or null if this information is not available.

  • line: The line number in the original source, or null if this information is not available. The line number is 1-based.

  • column: The column number in the original source, or null if this information is not available. The column number is 0-based.

  • name: The original identifier, or null if this information is not available.

SourceMapConsumer.prototype.generatedPositionFor(originalPosition)

Returns the generated line and column information for the original source, line, and column positions provided. The only argument is an object with the following properties:

  • source: The filename of the original source.

  • line: The line number in the original source. The line number is 1-based.

  • column: The column number in the original source. The column number is 0-based.

and an object is returned with the following properties:

  • line: The line number in the generated source, or null. The line number is 1-based.

  • column: The column number in the generated source, or null. The column number is 0-based.

SourceMapConsumer.prototype.allGeneratedPositionsFor(originalPosition)

Returns all generated line and column information for the original source, line, and column provided. If no column is provided, returns all mappings corresponding to either the line we are searching for or the next closest line that has any mappings. Otherwise, returns all mappings corresponding to the given line and either the column we are searching for or the next closest column that has any offsets.

The only argument is an object with the following properties:

  • source: The filename of the original source.

  • line: The line number in the original source. The line number is 1-based.

  • column: Optional. The column number in the original source. The column number is 0-based.

and an array of objects is returned, each with the following properties:

  • line: The line number in the generated source, or null. The line number is 1-based.

  • column: The column number in the generated source, or null. The column number is 0-based.

SourceMapConsumer.prototype.hasContentsOfAllSources()

Return true if we have the embedded source content for every source listed in the source map, false otherwise.

In other words, if this method returns true, then consumer.sourceContentFor(s) will succeed for every source s in consumer.sources.

SourceMapConsumer.prototype.sourceContentFor(source[, returnNullOnMissing])

Returns the original source content for the source provided. The only argument is the URL of the original source file.

If the source content for the given source is not found, then an error is thrown. Optionally, pass true as the second param to have null returned instead.

SourceMapConsumer.prototype.eachMapping(callback, context, order)

Iterate over each mapping between an original source/line/column and a generated line/column in this source map.

  • callback: The function that is called with each mapping. Mappings have the form { source, generatedLine, generatedColumn, originalLine, originalColumn, name }

  • context: Optional. If specified, this object will be the value of this every time that callback is called.

  • order: Either SourceMapConsumer.GENERATED_ORDER or SourceMapConsumer.ORIGINAL_ORDER. Specifies whether you want to iterate over the mappings sorted by the generated file’s line/column order or the original’s source/line/column order, respectively. Defaults to SourceMapConsumer.GENERATED_ORDER.

SourceMapGenerator

An instance of the SourceMapGenerator represents a source map which is being built incrementally.

new SourceMapGenerator([startOfSourceMap])

You may pass an object with the following properties:

  • file: The filename of the generated source that this source map is associated with.

  • sourceRoot: A root for all relative URLs in this source map.

  • skipValidation: Optional. When true, disables validation of mappings as they are added. This can improve performance but should be used with discretion, as a last resort. Even then, one should avoid using this flag when running tests, if possible.

SourceMapGenerator.fromSourceMap(sourceMapConsumer)

Creates a new SourceMapGenerator from an existing SourceMapConsumer instance.

  • sourceMapConsumer The SourceMap.

SourceMapGenerator.prototype.addMapping(mapping)

Add a single mapping from original source line and column to the generated source’s line and column for this source map being created. The mapping object should have the following properties:

  • generated: An object with the generated line and column positions.

  • original: An object with the original line and column positions.

  • source: The original source file (relative to the sourceRoot).

  • name: An optional original token name for this mapping.

SourceMapGenerator.prototype.setSourceContent(sourceFile, sourceContent)

Set the source content for an original source file.

  • sourceFile the URL of the original source file.

  • sourceContent the content of the source file.

SourceMapGenerator.prototype.applySourceMap(sourceMapConsumer[, sourceFile[, sourceMapPath]])

Applies a SourceMap for a source file to the SourceMap. Each mapping to the supplied source file is rewritten using the supplied SourceMap. Note: The resolution for the resulting mappings is the minimum of this map and the supplied map.

  • sourceMapConsumer: The SourceMap to be applied.

  • sourceFile: Optional. The filename of the source file. If omitted, sourceMapConsumer.file will be used, if it exists. Otherwise an error will be thrown.

  • sourceMapPath: Optional. The dirname of the path to the SourceMap to be applied. If relative, it is relative to the SourceMap.

    This parameter is needed when the two SourceMaps aren’t in the same directory, and the SourceMap to be applied contains relative source paths. If so, those relative source paths need to be rewritten relative to the SourceMap.

    If omitted, it is assumed that both SourceMaps are in the same directory, thus not needing any rewriting. (Supplying '.' has the same effect.)

SourceMapGenerator.prototype.toString()

Renders the source map being generated to a string.

SourceNode

SourceNodes provide a way to abstract over interpolating and/or concatenating snippets of generated JavaScript source code, while maintaining the line and column information associated between those snippets and the original source code. This is useful as the final intermediate representation a compiler might use before outputting the generated JS and source map.

new SourceNode([line, column, source[, chunk[, name]]])

  • line: The original line number associated with this source node, or null if it isn’t associated with an original line. The line number is 1-based.

  • column: The original column number associated with this source node, or null if it isn’t associated with an original column. The column number is 0-based.

  • source: The original source’s filename; null if no filename is provided.

  • chunk: Optional. Is immediately passed to SourceNode.prototype.add, see below.

  • name: Optional. The original identifier.

SourceNode.fromStringWithSourceMap(code, sourceMapConsumer[, relativePath])

Creates a SourceNode from generated code and a SourceMapConsumer.

  • code: The generated code

  • sourceMapConsumer The SourceMap for the generated code

  • relativePath The optional path that relative sources in sourceMapConsumer should be relative to.

SourceNode.prototype.add(chunk)

Add a chunk of generated JS to this source node.

  • chunk: A string snippet of generated JS code, another instance of SourceNode, or an array where each member is one of those things.

SourceNode.prototype.prepend(chunk)

Prepend a chunk of generated JS to this source node.

  • chunk: A string snippet of generated JS code, another instance of SourceNode, or an array where each member is one of those things.

SourceNode.prototype.setSourceContent(sourceFile, sourceContent)

Set the source content for a source file. This will be added to the SourceMap in the sourcesContent field.

  • sourceFile: The filename of the source file

  • sourceContent: The content of the source file

SourceNode.prototype.walk(fn)

Walk over the tree of JS snippets in this node and its children. The walking function is called once for each snippet of JS and is passed that snippet and its original associated source’s line/column location.

  • fn: The traversal function.

SourceNode.prototype.walkSourceContents(fn)

Walk over the tree of SourceNodes. The walking function is called for each source file content and is passed the filename and source content.

  • fn: The traversal function.

SourceNode.prototype.join(sep)

Like Array.prototype.join except for SourceNodes. Inserts the separator between each of this source node’s children.

  • sep: The separator.

SourceNode.prototype.replaceRight(pattern, replacement)

Call String.prototype.replace on the very right-most source snippet. Useful for trimming white space from the end of a source node, etc.

  • pattern: The pattern to replace.

  • replacement: The thing to replace the pattern with.

SourceNode.prototype.toString()

Return the string representation of this source node. Walks over the tree and concatenates all the various snippets together to one string.

SourceNode.prototype.toStringWithSourceMap([startOfSourceMap])

Returns the string representation of this tree of source nodes, plus a SourceMapGenerator which contains all the mappings between the generated and original sources.

The arguments are the same as those to new SourceMapGenerator.



sshpk

Parse, convert, fingerprint and use SSH keys (both public and private) in pure node – no ssh-keygen or other external dependencies.

This library has been extracted from node-http-signature (work by Mark Cavage and Dave Eddy) and node-ssh-fingerprint (work by Dave Eddy), with additions (including ECDSA support) by Alex Wilson.

Install

npm install sshpk

Examples

Example output:

type => rsa
size => 2048 bits
comment => foo@foo.com
fingerprint => SHA256:PYC9kPVC6J873CSIbfp0LwYeczP/W4ffObNCuDJ1u5w
old-style fingerprint => a0:c8:ad:6c:32:9a:32:fa:59:cc:a9:8c:0a:0d:6e:bd

More examples: converting between formats:

Signing and verifying:

Matching fingerprints with keys:

Usage

Public keys

parseKey(data[, format = 'auto'[, options]])

Parses a key from a given data format and returns a new Key object.

Parameters

  • data – Either a Buffer or String, containing the key
  • format – String name of format to use, valid options are:
    • auto: choose automatically from all below
    • pem: supports both PKCS#1 and PKCS#8
    • ssh: standard OpenSSH format
    • pkcs1, pkcs8: variants of pem
    • rfc4253: raw OpenSSH wire format
    • openssh: new post-OpenSSH 6.5 internal format, produced by ssh-keygen -o
    • dnssec: .key file format output by dnssec-keygen etc
    • putty: the PuTTY .ppk file format (supports truncated variant without all the lines from Private-Lines: onwards)
  • options – Optional Object, extra options, with keys:
    • filename – Optional String, name for the key being parsed (eg. the filename that was opened). Used to generate Error messages
    • passphrase – Optional String, encryption passphrase used to decrypt an encrypted PEM file

Key.isKey(obj)

Returns true if the given object is a valid Key object created by a version of sshpk compatible with this one.

Parameters

  • obj – Object to identify

Key#type

String, the type of key. Valid options are rsa, dsa, ecdsa.

Key#size

Integer, “size” of the key in bits. For RSA/DSA this is the size of the modulus; for ECDSA this is the bit size of the curve in use.

Key#comment

Optional string, a key comment used by some formats (eg the ssh format).

Key#curve

Only present if this.type === 'ecdsa', string containing the name of the named curve used with this key. Possible values include nistp256, nistp384 and nistp521.

Key#toBuffer([format = 'ssh'])

Convert the key into a given data format and return the serialized key as a Buffer.

Parameters

  • format – String name of format to use, for valid options see parseKey()

Key#toString([format = ssh])

Same as this.toBuffer(format).toString().

Key#fingerprint([algorithm = 'sha256'[, hashType = 'ssh']])

Creates a new Fingerprint object representing this Key’s fingerprint.

Parameters

  • algorithm – String name of hash algorithm to use, valid options are md5, sha1, sha256, sha384, sha512
  • hashType – String name of fingerprint hash type to use, valid options are ssh (the type of fingerprint used by OpenSSH, e.g. in ssh-keygen), spki (used by HPKP, some OpenSSL applications)

Key#createVerify([hashAlgorithm])

Creates a crypto.Verifier specialized to use this Key (and the correct public key algorithm to match it). The returned Verifier has the same API as a regular one, except that the verify() function takes only the target signature as an argument.

Parameters

  • hashAlgorithm – optional String name of hash algorithm to use, any supported by OpenSSL are valid, usually including sha1, sha256.

v.verify(signature[, format]) Parameters

  • signature – either a Signature object, or a Buffer or String
  • format – optional String, name of format to interpret given String with. Not valid if signature is a Signature or Buffer.

Key#createDiffieHellman()

Key#createDH()

Creates a Diffie-Hellman key exchange object initialized with this key and all necessary parameters. It has the same API as a crypto.DiffieHellman instance, except that functions take Key and PrivateKey objects as arguments, and return them where appropriate.

This is only valid for keys belonging to a cryptosystem that supports DHE or a close analogue (i.e. dsa, ecdsa and curve25519 keys). An attempt to call this function on other keys will yield an Error.

Private keys

parsePrivateKey(data[, format = 'auto'[, options]])

Parses a private key from a given data format and returns a new PrivateKey object.

Parameters

  • data – Either a Buffer or String, containing the key
  • format – String name of format to use, valid options are:
    • auto: choose automatically from all below
    • pem: supports both PKCS#1 and PKCS#8
    • ssh, openssh: new post-OpenSSH 6.5 internal format, produced by ssh-keygen -o
    • pkcs1, pkcs8: variants of pem
    • rfc4253: raw OpenSSH wire format
    • dnssec: .private format output by dnssec-keygen etc.
  • options – Optional Object, extra options, with keys:
    • filename – Optional String, name for the key being parsed (e.g. the filename that was opened). Used to generate Error messages.
    • passphrase – Optional String, encryption passphrase used to decrypt an encrypted PEM file

generatePrivateKey(type[, options])

Generates a new private key of a certain key type, from random data.

Parameters

  • type – String, type of key to generate. Currently supported are 'ecdsa' and 'ed25519'
  • options – optional Object, with keys:
    • curve – optional String, for 'ecdsa' keys, specifies the curve to use. If ECDSA is specified and this option is not given, defaults to using 'nistp256'.

PrivateKey.isPrivateKey(obj)

Returns true if the given object is a valid PrivateKey object created by a version of sshpk compatible with this one.

Parameters

  • obj – Object to identify

PrivateKey#type

String, the type of key. Valid options are rsa, dsa, ecdsa, ed25519 and curve25519.

PrivateKey#size

Integer, “size” of the key in bits. For RSA/DSA this is the size of the modulus; for ECDSA this is the bit size of the curve in use.

PrivateKey#curve

Only present if this.type === 'ecdsa', string containing the name of the named curve used with this key. Possible values include nistp256, nistp384 and nistp521.

PrivateKey#toBuffer([format = 'pkcs1'])

Convert the key into a given data format and return the serialized key as a Buffer.

Parameters

  • format – String name of format to use, valid options are listed under parsePrivateKey. Note that ED25519 keys default to openssh format instead (as they have no pkcs1 representation).

PrivateKey#toString([format = 'pkcs1'])

Same as this.toBuffer(format).toString().

PrivateKey#toPublic()

Extract just the public part of this private key, and return it as a Key object.

PrivateKey#fingerprint([algorithm = 'sha256'])

Same as this.toPublic().fingerprint().

PrivateKey#createVerify([hashAlgorithm])

Same as this.toPublic().createVerify().

PrivateKey#createSign([hashAlgorithm])

Creates a crypto.Sign specialized to use this PrivateKey (and the correct key algorithm to match it). The returned Signer has the same API as a regular one, except that the sign() function takes no arguments, and returns a Signature object.

Parameters

  • hashAlgorithm – optional String name of hash algorithm to use, any supported by OpenSSL are valid, usually including sha1, sha256.

v.sign() Parameters

  • none

PrivateKey#derive(newType)

Derives a related key of type newType from this key. Currently this is only supported to change between ed25519 and curve25519 keys which are stored with the same private key (but usually distinct public keys in order to avoid degenerate keys that lead to a weak Diffie-Hellman exchange).

Parameters

  • newType – String, type of key to derive, either ed25519 or curve25519

Fingerprints

parseFingerprint(fingerprint[, options])

Pre-parses a fingerprint, creating a Fingerprint object that can be used to quickly locate a key by using the Fingerprint#matches function.

Parameters

  • fingerprint – String, the fingerprint value, in any supported format
  • options – Optional Object, with properties:
    • algorithms – Array of strings, names of hash algorithms to limit support to. If fingerprint uses a hash algorithm not on this list, throws InvalidAlgorithmError.
    • hashType – String, the type of hash the fingerprint uses, either ssh or spki (normally auto-detected based on the format, but can be overridden)
    • type – String, the entity this fingerprint identifies, either key or certificate

Fingerprint.isFingerprint(obj)

Returns true if the given object is a valid Fingerprint object created by a version of sshpk compatible with this one.

Parameters

  • obj – Object to identify

Fingerprint#toString([format])

Returns a fingerprint as a string, in the given format.

Parameters

  • format – Optional String, format to use, valid options are hex and base64. If this Fingerprint uses the md5 algorithm, the default format is hex. Otherwise, the default is base64.

Fingerprint#matches(keyOrCertificate)

Verifies whether or not this Fingerprint matches a given Key or Certificate. This function uses double-hashing to avoid leaking timing information. Returns a boolean.

Note that a Key-type Fingerprint will always return false if asked to match a Certificate and vice versa.

Parameters

  • keyOrCertificate – a Key object or Certificate object, the entity to match this fingerprint against

Signatures

parseSignature(signature, algorithm, format)

Parses a signature in a given format, creating a Signature object. Useful for converting between the SSH and ASN.1 (PKCS/OpenSSL) signature formats, and also returned as output from PrivateKey#createSign().sign().

A Signature object can also be passed to a verifier produced by Key#createVerify() and it will automatically be converted internally into the correct format for verification.

Parameters

  • signature – a Buffer (binary) or String (base64), data of the actual signature in the given format
  • algorithm – a String, name of the algorithm to be used, possible values are rsa, dsa, ecdsa
  • format – a String, either asn1 or ssh

Signature.isSignature(obj)

Returns true if the given object is a valid Signature object created by a version of sshpk compatible with this one.

Parameters

  • obj – Object to identify

Signature#toBuffer([format = 'asn1'])

Converts a Signature to the given format and returns it as a Buffer.

Parameters

  • format – a String, either asn1 or ssh

Signature#toString([format = 'asn1'])

Same as this.toBuffer(format).toString('base64').

Certificates

sshpk includes basic support for parsing certificates in X.509 (PEM) format and the OpenSSH certificate format. This feature is intended to be used mainly to access basic metadata about certificates, extract public keys from them, and also to generate simple self-signed certificates from an existing key.

Notably, there is no implementation of CA chain-of-trust verification, and only very minimal support for key usage restrictions. Please do the security world a favour, and DO NOT use this code for certificate verification in the traditional X.509 CA chain style.

parseCertificate(data, format)

Parameters

  • data – a Buffer or String
  • format – a String, format to use, one of 'openssh', 'pem' (X.509 in a PEM wrapper), or 'x509' (raw DER encoded)

createSelfSignedCertificate(subject, privateKey[, options])

Parameters

  • subject – an Identity, the subject of the certificate
  • privateKey – a PrivateKey, the key of the subject: will be used both to be placed in the certificate and also to sign it (since this is a self-signed certificate)
  • options – optional Object, with keys:
    • lifetime – optional Number, lifetime of the certificate from now in seconds
    • validFrom, validUntil – optional Dates, beginning and end of certificate validity period. If given lifetime will be ignored
    • serial – optional Buffer, the serial number of the certificate
    • purposes – optional Array of String, X.509 key usage restrictions

createCertificate(subject, key, issuer, issuerKey[, options])

Parameters

  • subject – an Identity, the subject of the certificate
  • key – a Key, the public key of the subject
  • issuer – an Identity, the issuer of the certificate who will sign it
  • issuerKey – a PrivateKey, the issuer’s private key for signing
  • options – optional Object, with keys:
    • lifetime – optional Number, lifetime of the certificate from now in seconds
    • validFrom, validUntil – optional Dates, beginning and end of certificate validity period. If given lifetime will be ignored
    • serial – optional Buffer, the serial number of the certificate
    • purposes – optional Array of String, X.509 key usage restrictions

Certificate#subjects

Array of Identity instances describing the subject of this certificate.

Certificate#issuer

The Identity of the Certificate’s issuer (signer).

Certificate#subjectKey

The public key of the subject of the certificate, as a Key instance.

Certificate#issuerKey

The public key of the signing issuer of this certificate, as a Key instance. May be undefined if the issuer’s key is unknown (e.g. on an X509 certificate).

Certificate#serial

The serial number of the certificate. As this is normally a 64-bit or wider integer, it is returned as a Buffer.

Certificate#purposes

Array of Strings indicating the X.509 key usage purposes that this certificate is valid for. The possible strings at the moment are:

  • 'signature' – key can be used for digital signatures
  • 'identity' – key can be used to attest about the identity of the signer (X.509 calls this nonRepudiation)
  • 'codeSigning' – key can be used to sign executable code
  • 'keyEncryption' – key can be used to encrypt other keys
  • 'encryption' – key can be used to encrypt data (only applies for RSA)
  • 'keyAgreement' – key can be used for key exchange protocols such as Diffie-Hellman
  • 'ca' – key can be used to sign other certificates (is a Certificate Authority)
  • 'crl' – key can be used to sign Certificate Revocation Lists (CRLs)

Certificate#getExtension(nameOrOid)

Retrieves information about a certificate extension, if present, or returns undefined if not. The string argument nameOrOid should be either the OID (for X509 extensions) or the name (for OpenSSH extensions) of the extension to retrieve.

The object returned will have the following properties:

  • format – String, set to either 'x509' or 'openssh'
  • name or oid – String, only one set based on value of format
  • data – Buffer, the raw data inside the extension

Certificate#getExtensions()

Returns an Array of all present certificate extensions, in the same manner and format as getExtension().

Certificate#isExpired([when])

Tests whether the Certificate is currently expired (i.e. the validFrom and validUntil dates specify a range of time that does not include the current time).

Parameters

  • when – optional Date, if specified, tests whether the Certificate was or will be expired at the specified time instead of now

Returns a Boolean.

Certificate#isSignedByKey(key)

Tests whether the Certificate was validly signed by the given (public) Key.

Parameters

  • key – a Key instance

Returns a Boolean.

Certificate#isSignedBy(certificate)

Tests whether this Certificate was validly signed by the subject of the given certificate. Also tests that the issuer Identity of this Certificate and the subject Identity of the other Certificate are equivalent.

Parameters

  • certificate – another Certificate instance

Returns a Boolean.

Certificate#fingerprint([hashAlgo])

Returns the X509-style fingerprint of the entire certificate (as a Fingerprint instance). This matches what a web-browser or similar would display as the certificate fingerprint and should not be confused with the fingerprint of the subject’s public key.

Parameters

  • hashAlgo – an optional String, any hash function name

Certificate#toBuffer([format])

Serializes the Certificate to a Buffer and returns it.

Parameters

  • format – an optional String, output format, one of 'openssh', 'pem' or 'x509'. Defaults to 'x509'.

Returns a Buffer.

Certificate#toString([format])

Parameters

  • format – an optional String, output format, one of 'openssh', 'pem' or 'x509'. Defaults to 'pem'.

Returns a String.

Certificate identities

identityForHost(hostname)

Constructs a host-type Identity for a given hostname.

Parameters

  • hostname – the fully qualified DNS name of the host

Returns an Identity instance.

identityForUser(uid)

Constructs a user-type Identity for a given UID.

Parameters

  • uid – a String, user identifier (login name)

Returns an Identity instance.

identityForEmail(email)

Constructs an email-type Identity for a given email address.

Parameters

  • email – a String, email address

Returns an Identity instance.

identityFromDN(dn)

Parses an LDAP-style DN string (e.g. 'CN=foo, C=US') and turns it into an Identity instance.

Parameters

  • dn – a String

Returns an Identity instance.

identityFromArray(arr)

Constructs an Identity from an array of DN components (see Identity#toArray() for the format).

Parameters

  • arr – an Array of Objects, DN components with name and value

Returns an Identity instance.

The following attribute names are recognized in DNs and mapped to these OIDs:

Attribute name OID
cn 2.5.4.3
o 2.5.4.10
ou 2.5.4.11
l 2.5.4.7
s 2.5.4.8
c 2.5.4.6
sn 2.5.4.4
postalCode 2.5.4.17
serialNumber 2.5.4.5
street 2.5.4.9
x500UniqueIdentifier 2.5.4.45
role 2.5.4.72
telephoneNumber 2.5.4.20
description 2.5.4.13
dc 0.9.2342.19200300.100.1.25
uid 0.9.2342.19200300.100.1.1
mail 0.9.2342.19200300.100.1.3
title 2.5.4.12
gn 2.5.4.42
initials 2.5.4.43
pseudonym 2.5.4.65

Identity#toString()

Returns the identity as an LDAP-style DN string. e.g. 'CN=foo, O=bar corp, C=us'

Identity#type

The type of identity. One of 'host', 'user', 'email' or 'unknown'

Identity#hostname

Identity#uid

Identity#email

Set when type is 'host', 'user', or 'email', respectively. Strings.

Identity#cn

The value of the first CN= in the DN, if any. It’s probably better to use the #get() method instead of this property.

Identity#get(name[, asArray])

Returns the value of a named attribute in the Identity DN. If there is no attribute of the given name, returns undefined. If multiple components of the DN contain an attribute of this name, an exception is thrown unless the asArray argument is given as true – then they will be returned as an Array in the same order they appear in the DN.

Parameters

  • name – a String
  • asArray – an optional Boolean

Identity#toArray()

Returns the Identity as an Array of DN component objects. This looks like:

Each object has a name and a value property. The returned objects may be safely modified.

Errors

InvalidAlgorithmError

The specified algorithm is not valid, either because it is not supported, or because it was not included on a list of allowed algorithms.

Thrown by Fingerprint.parse, Key#fingerprint.

Properties

  • algorithm – the algorithm that could not be validated

FingerprintFormatError

The fingerprint string given could not be parsed as a supported fingerprint format, or the specified fingerprint format is invalid.

Thrown by Fingerprint.parse, Fingerprint#toString.

Properties

  • fingerprint – if caused by a fingerprint, the string value given
  • format – if caused by an invalid format specification, the string value given

KeyParseError

The key data given could not be parsed as a valid key.

Properties

  • keyName – filename that was given to parseKey
  • format – the format that was trying to parse the key (see parseKey)
  • innerErr – the inner Error thrown by the format parser

KeyEncryptedError

The key is encrypted with a symmetric key (i.e., it is password protected). The parsing operation would succeed if it were given the passphrase option.

Properties

  • keyName – filename that was given to parseKey
  • format – the format that was trying to parse the key (currently can only be "pem")

CertificateParseError

The certificate data given could not be parsed as a valid certificate.

Properties

  • certName – filename that was given to parseCertificate
  • format – the format that was trying to parse the certificate (see parseCertificate)
  • innerErr – the inner Error thrown by the format parser

Friends of sshpk

  • sshpk-agent is a library for speaking the ssh-agent protocol from node.js, which uses sshpk


Picomatch




Blazing fast and accurate glob matcher written in JavaScript.
No dependencies and full support for standard and extended Bash glob features, including braces, extglobs, POSIX brackets, and regular expressions.



Why picomatch?

  • Lightweight - No dependencies
  • Minimal - Tiny API surface. Main export is a function that takes a glob pattern and returns a matcher function.
  • Fast - Loads in about 2ms (that’s several times faster than a single frame of an HD movie at 60fps)
  • Performant - Use the returned matcher function to speed up repeat matching (like when watching files)
  • Accurate matching - Using wildcards (* and ?), globstars (**) for nested directories, advanced globbing with extglobs, braces, and POSIX brackets, and support for escaping special characters with \ or quotes.
  • Well tested - Thousands of unit tests

See the library comparison to other libraries.






Install

Install with npm:
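
```shell
npm install --save picomatch
```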


Usage

The main export is a function that takes a glob pattern and an options object and returns a function for matching strings.


API

picomatch

Creates a matcher function from one or more glob patterns. The returned function takes a string to match as its first argument, and returns true if the string is a match. The returned matcher function also takes a boolean as the second argument that, when true, returns an object with additional information.

Params

  • globs {String|Array}: One or more glob patterns.
  • options {Object=}
  • returns {Function=}: Returns a matcher function.

Example

.test

Test input with the given regex. This is used by the main picomatch() function to test the input string.

Params

  • input {String}: String to test.
  • regex {RegExp}
  • returns {Object}: Returns an object with matching info.

Example

.matchBase

Match the basename of a filepath.

Params

  • input {String}: String to test.
  • glob {RegExp|String}: Glob pattern or regex created by .makeRe.
  • returns {Boolean}

Example

.isMatch

Returns true if any of the given glob patterns match the specified string.

Params

  • str {String|Array}: The string to test.
  • patterns {String|Array}: One or more glob patterns to use for matching.
  • options {Object}: See available options.
  • returns {Boolean}: Returns true if any patterns match str

Example

.parse

Parse a glob pattern to create the source string for a regular expression.

Params

  • pattern {String}
  • options {Object}
  • returns {Object}: Returns an object with useful properties and output to be used as a regex source string.

Example

.scan

Scan a glob pattern to separate the pattern into segments.

Params

  • input {String}: Glob pattern to scan.
  • options {Object}
  • returns {Object}: Returns an object with information about the scanned pattern, including its non-glob base and the glob segment.

Example

.compileRe

Create a regular expression from a parsed glob pattern.

Params

  • state {Object}: The object returned from the .parse method.
  • options {Object}
  • returns {RegExp}: Returns a regex created from the given pattern.

Example

.toRegex

Create a regular expression from the given regex source string.

Params

  • source {String}: Regular expression source string.
  • options {Object}
  • returns {RegExp}

Example


Options

Picomatch options

The following options may be used with the main picomatch() function or any of the methods on the picomatch API.

Option Type Default value Description
basename boolean false If set, then patterns without slashes will be matched against the basename of the path if it contains slashes. For example, a?b would match the path /xyz/123/acb, but not /xyz/acb/123.
bash boolean false Follow bash matching rules more strictly - disallows backslashes as escape characters, and treats single stars as globstars (**).
capture boolean undefined Return regex matches in supporting methods.
contains boolean undefined Allows glob to match any part of the given string(s).
cwd string process.cwd() Current working directory. Used by picomatch.split()
debug boolean undefined Debug regular expressions when an error is thrown.
dot boolean false Enable dotfile matching. By default, dotfiles are ignored unless a . is explicitly defined in the pattern, or options.dot is true
expandRange function undefined Custom function for expanding ranges in brace patterns, such as {a..z}. The function receives the range values as two arguments, and it must return a string to be used in the generated regex. It’s recommended that returned strings be wrapped in parentheses.
failglob boolean false Throws an error if no matches are found. Based on the bash option of the same name.
fastpaths boolean true To speed up processing, full parsing is skipped for a handful common glob patterns. Disable this behavior by setting this option to false.
flags boolean undefined Regex flags to use in the generated regex. If defined, the nocase option will be overridden.
format function undefined Custom function for formatting the returned string. This is useful for removing leading slashes, converting Windows paths to Posix paths, etc.
ignore array|string undefined One or more glob patterns for excluding strings that should not be matched from the result.
keepQuotes boolean false Retain quotes in the generated regex, since quotes may also be used as an alternative to backslashes.
literalBrackets boolean undefined When true, brackets in the glob pattern will be escaped so that only literal brackets will be matched.
lookbehinds boolean true Support regex lookbehinds in patterns. Note that this requires a JavaScript engine with regex lookbehind support.
matchBase boolean false Alias for basename
maxLength boolean 65536 Limit the max length of the input string. An error is thrown if the input string is longer than this value.
nobrace boolean false Disable brace matching, so that {a,b} and {1..3} would be treated as literal characters.
nobracket boolean undefined Disable matching with regex brackets.
nocase boolean false Make matching case-insensitive. Equivalent to the regex i flag. Note that this option is overridden by the flags option.
nodupes boolean true Deprecated, use nounique instead. This option will be removed in a future major release. By default duplicates are removed. Disable uniquification by setting this option to false.
noext boolean false Alias for noextglob
noextglob boolean false Disable support for matching with extglobs (like +(a|b))
noglobstar boolean false Disable support for matching nested directories with globstars (**)
nonegate boolean false Disable support for negating with leading !
noquantifiers boolean false Disable support for regex quantifiers (like a{1,2}) and treat them as brace patterns to be expanded.
onIgnore function undefined Function to be called on ignored items.
onMatch function undefined Function to be called on matched items.
onResult function undefined Function to be called on all items, regardless of whether or not they are matched or ignored.
posix boolean false Support POSIX character classes (“POSIX brackets”), such as [[:alpha:]].
posixSlashes boolean undefined Convert all slashes in file paths to forward slashes. This does not convert slashes in the glob pattern itself
prepend boolean undefined String to prepend to the generated regex used for matching.
regex boolean false Use regular expression rules for + (instead of matching literal +), and for stars that follow closing parentheses or brackets (as in )* and ]*).
strictBrackets boolean undefined Throw an error if brackets, braces, or parens are imbalanced.
strictSlashes boolean undefined When true, picomatch won’t match trailing slashes with single stars.
unescape boolean undefined Remove backslashes preceding escaped characters in the glob pattern. By default, backslashes are retained.
unixify boolean undefined Alias for posixSlashes, for backwards compatibility.

Scan Options

In addition to the main picomatch options, the following options may also be used with the .scan method.

Option Type Default value Description
tokens boolean false When true, the returned object will include an array of tokens (objects), representing each path “segment” in the scanned glob pattern
parts boolean false When true, the returned object will include an array of strings representing each path “segment” in the scanned glob pattern. This is automatically enabled when options.tokens is true

Example


Options Examples

options.expandRange

Type: function

Default: undefined

Custom function for expanding ranges in brace patterns. The fill-range library is ideal for this purpose, or you can use custom code to do whatever you need.

Example

The following example shows how to create a glob that matches folder names within a numeric range.

options.format

Type: function

Default: undefined

Custom function for formatting strings before they’re matched.

Example

options.onMatch

options.onIgnore

options.onResult



Globbing features

Basic globbing

Character Description
* Matches any character zero or more times, excluding path separators. Does not match path separators or hidden files or directories (“dotfiles”), unless explicitly enabled by setting the dot option to true.
** Matches any character zero or more times, including path separators. Note that ** will only match path separators (/, and \\ on Windows) when they are the only characters in a path segment. Thus, foo**/bar is equivalent to foo*/bar, and foo/a**b/bar is equivalent to foo/a*b/bar, and more than two consecutive stars in a glob path segment are regarded as a single star. Thus, foo/***/bar is equivalent to foo/*/bar.
? Matches any character excluding path separators one time. Does not match path separators or leading dots.
[abc] Matches any characters inside the brackets. For example, [abc] would match the characters a, b or c, and nothing else.

Matching behavior vs. Bash

Picomatch’s matching features and expected results in unit tests are based on Bash’s unit tests and the Bash 4.3 specification, with the following exceptions:

  • Bash will match foo/bar/baz with *. Picomatch only matches nested directories with **.
  • Bash greedily matches with negated extglobs. For example, Bash 4.3 says that !(foo)* should match foo and foobar, since the trailing * backtracks to match the preceding pattern. This is very memory-inefficient and, IMHO, also incorrect. Picomatch would return false for both foo and foobar.


Advanced globbing

Extglobs

Pattern Description
@(pattern) Match only one consecutive occurrence of pattern
*(pattern) Match zero or more consecutive occurrences of pattern
+(pattern) Match one or more consecutive occurrences of pattern
?(pattern) Match zero or one consecutive occurrences of pattern
!(pattern) Match anything but pattern

Examples

Buffer.allocUnsafeSlow() should be used only as a last resort, after a developer has observed undue memory retention in their applications.

A TypeError will be thrown if size is not a number.

All the Rest

The rest of the Buffer API is exactly the same as in node.js. See the docs.

Why is Buffer unsafe?

Today, the node.js Buffer constructor is overloaded to handle many different argument types like String, Array, Object, TypedArrayView (Uint8Array, etc.), ArrayBuffer, and also Number.

The API is optimized for convenience: you can throw any type at it, and it will try to do what you want.

Because the Buffer constructor is so powerful, you often see code like this:
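
For example, a convenience helper of the kind described (sketch):

```js
// Accepts "any type" thanks to the overloaded Buffer constructor.
function toHex (str) {
  return new Buffer(str).toString('hex')
}

console.log(toHex('abc'))  //=> '616263'
```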

But what happens if toHex is called with a Number argument?

Remote Memory Disclosure

If an attacker can make your program call the Buffer constructor with a Number argument, then they can make it allocate uninitialized memory from the node.js process. This could potentially disclose TLS private keys, user data, or database passwords.

When the Buffer constructor is passed a Number argument, it returns an UNINITIALIZED block of memory of the specified size. When you create a Buffer like this, you MUST overwrite the contents before returning it to the user.

From the node.js docs:

new Buffer(size)

  • size Number

The underlying memory for Buffer instances created in this way is not initialized. The contents of a newly created Buffer are unknown and could contain sensitive data. Use buf.fill(0) to initialize a Buffer to zeroes.

(Emphasis our own.)

When the programmer intended to create an uninitialized Buffer, you often see code like this:
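
Typically something like this (sketch):

```js
var buf = new Buffer(16)   // 16 bytes of UNINITIALIZED memory

// ...immediately followed by code that overwrites every byte:
for (var i = 0; i < buf.length; i++) {
  buf[i] = 0
}
```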

Would this ever be a problem in real code?

Yes. It’s surprisingly common to forget to check the type of your variables in a dynamically-typed language like JavaScript.

Usually the consequences of assuming the wrong type is that your program crashes with an uncaught exception. But the failure mode for forgetting to check the type of arguments to the Buffer constructor is more catastrophic.

Here’s an example of a vulnerable service that takes a JSON payload and converts it to hex:

In this example, an http client just has to send a JSON payload in which a number (e.g. 1000) appears in place of the expected string, and it will get back 1,000 bytes of uninitialized memory from the server.

This is a very serious bug. It’s similar in severity to the Heartbleed bug that allowed disclosure of OpenSSL process memory by remote attackers.

Which real-world packages were vulnerable?

bittorrent-dht

Mathias Buus and I (Feross Aboukhadijeh) found this issue in one of our own packages, bittorrent-dht. The bug would allow anyone on the internet to send a series of messages to a user of bittorrent-dht and get them to reveal 20 bytes at a time of uninitialized memory from the node.js process.

Here’s the commit that fixed it. We released a new fixed version, created a Node Security Project disclosure, and deprecated all vulnerable versions on npm so users will get a warning to upgrade to a newer version.

ws

That got us wondering if there were other vulnerable packages. Sure enough, within a short period of time, we found the same issue in ws, the most popular WebSocket implementation in node.js.

If certain APIs were called with Number parameters instead of String or Buffer as expected, then uninitialized server memory would be disclosed to the remote peer.

These were the vulnerable methods:

Here’s a vulnerable socket server with some echo functionality:

socket.send(number), called on the server, will disclose server memory.

Here’s the release where the issue was fixed, with a more detailed explanation. Props to Arnout Kazemier for the quick fix. Here’s the Node Security Project disclosure.

What’s the solution?

It’s important that node.js offers a fast way to get memory; otherwise, performance-critical applications would needlessly get a lot slower.

But we need a better way to signal our intent as programmers. When we want uninitialized memory, we should request it explicitly.

Sensitive functionality should not be packed into a developer-friendly API that loosely accepts many different types. This type of API encourages the lazy practice of passing variables in without checking the type very carefully.

A new API: Buffer.allocUnsafe(number)

The functionality of creating buffers with uninitialized memory should be part of another API. We propose Buffer.allocUnsafe(number). This way, it’s not part of an API that frequently gets user input of all sorts of different types passed into it.

How do we fix node.js core?

We sent a PR to node.js core (merged as semver-major) which defends against one case:

In this situation, it’s implied that the programmer intended the first argument to be a string, since they passed an encoding as a second argument. Today, node.js will allocate uninitialized memory in the case of new Buffer(number, encoding), which is probably not what the programmer intended.

But this is only a partial solution, since if the programmer does new Buffer(variable) (without an encoding parameter) there’s no way to know what they intended. If variable is sometimes a number, then uninitialized memory will sometimes be returned.

What’s the real long-term fix?

We could deprecate and remove new Buffer(number) and use Buffer.allocUnsafe(number) when we need uninitialized memory. But that would break 1000s of packages.

We believe the best solution is to:

1. Change new Buffer(number) to return safe, zeroed-out memory

2. Create a new API for creating uninitialized Buffers. We propose: Buffer.allocUnsafe(number)

Update

We now support adding three new APIs:

  • Buffer.from(value) - convert from any type to a buffer
  • Buffer.alloc(size) - create a zero-filled buffer
  • Buffer.allocUnsafe(size) - create an uninitialized buffer with given size

This solves the core problem that affected ws and bittorrent-dht which is Buffer(variable) getting tricked into taking a number argument.

This way, existing code continues working and the impact on the npm ecosystem will be minimal. Over time, npm maintainers can migrate performance-critical code to use Buffer.allocUnsafe(number) instead of new Buffer(number).

Conclusion

This wasn’t merely a theoretical exercise because we found the issue in some of the most popular npm packages.

Fortunately, there’s an easy fix that can be applied today. Use safe-buffer in place of buffer.

Eventually, we hope that node.js core can switch to this new, safer behavior. We believe the impact on the ecosystem would be minimal since it’s not a breaking change. Well-maintained, popular packages would be updated to use Buffer.alloc quickly, while older, insecure packages would magically become safe from this attack vector.

credit

The original issues in bittorrent-dht (disclosure) and ws (disclosure) were discovered by Mathias Buus and Feross Aboukhadijeh.

Thanks to Adam Baldwin for helping disclose these issues and for his work running the Node Security Project.

Thanks to John Hiesey for proofreading this README and auditing the code.



safe-buffer travis npm downloads javascript style guide

Safer Node.js Buffer API

Use the new Node.js Buffer APIs (Buffer.from, Buffer.alloc, Buffer.allocUnsafe, Buffer.allocUnsafeSlow) in all versions of Node.js.

Uses the built-in implementation when available.

install

npm install safe-buffer

usage

The goal of this package is to provide a safe replacement for the node.js Buffer.

It’s a drop-in replacement for Buffer. You can use it by adding one require line to the top of your node.js modules:
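The switch is a single line at the top of each module. Here is a minimal sketch; it falls back to the built-in Buffer if safe-buffer is not installed, since the two expose the same API:

```javascript
// In real code: var Buffer = require('safe-buffer').Buffer
// Guarded here so the sketch also runs without safe-buffer installed:
let SafeBuffer
try {
  SafeBuffer = require('safe-buffer').Buffer
} catch (e) {
  SafeBuffer = Buffer // built-in Buffer has the same API on modern Node
}

console.log(SafeBuffer.alloc(4).toString('hex')) // 00000000 (zero-filled)
```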

api

Class Method: Buffer.from(array)

  • array {Array}

Allocates a new Buffer using an array of octets.

A TypeError will be thrown if array is not an Array.

Class Method: Buffer.from(arrayBuffer[, byteOffset[, length]])

  • arrayBuffer {ArrayBuffer} The .buffer property of a TypedArray or a new ArrayBuffer()
  • byteOffset {Number} Default: 0
  • length {Number} Default: arrayBuffer.length - byteOffset

When passed a reference to the .buffer property of a TypedArray instance, the newly created Buffer will share the same allocated memory as the TypedArray.

The optional byteOffset and length arguments specify a memory range within the arrayBuffer that will be shared by the Buffer.

A TypeError will be thrown if arrayBuffer is not an ArrayBuffer.

Class Method: Buffer.from(buffer)

  • buffer {Buffer}

Copies the passed buffer data onto a new Buffer instance.

A TypeError will be thrown if buffer is not a Buffer.

Class Method: Buffer.from(str[, encoding])

  • str {String} String to encode.
  • encoding {String} Encoding to use, Default: 'utf8'

Creates a new Buffer containing the given JavaScript string str. If provided, the encoding parameter identifies the character encoding. If not provided, encoding defaults to 'utf8'.

A TypeError will be thrown if str is not a string.

Class Method: Buffer.alloc(size[, fill[, encoding]])

  • size {Number}
  • fill {Value} Default: undefined
  • encoding {String} Default: utf8

Allocates a new Buffer of size bytes. If fill is undefined, the Buffer will be zero-filled.

The size must be less than or equal to the value of require('buffer').kMaxLength (on 64-bit architectures, kMaxLength is (2^31)-1). Otherwise, a RangeError is thrown. A zero-length Buffer will be created if a size less than or equal to 0 is specified.

If fill is specified, the allocated Buffer will be initialized by calling buf.fill(fill). See buf.fill() for more information.

If both fill and encoding are specified, the allocated Buffer will be initialized by calling buf.fill(fill, encoding). For example:
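A runnable sketch of the fill-plus-encoding form, using the built-in Buffer (same API as safe-buffer):

```javascript
// Fill an 11-byte Buffer by decoding a base64 string into it:
const buf = Buffer.alloc(11, 'aGVsbG8gd29ybGQ=', 'base64')
console.log(buf.toString()) // hello world
```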

Calling Buffer.alloc(size) can be significantly slower than the alternative Buffer.allocUnsafe(size) but ensures that the newly created Buffer instance contents will never contain sensitive data.

A TypeError will be thrown if size is not a number.

Class Method: Buffer.allocUnsafe(size)

  • size {Number}

Allocates a new non-zero-filled Buffer of size bytes. The size must be less than or equal to the value of require('buffer').kMaxLength (on 64-bit architectures, kMaxLength is (2^31)-1). Otherwise, a RangeError is thrown. A zero-length Buffer will be created if a size less than or equal to 0 is specified.

The underlying memory for Buffer instances created in this way is not initialized. The contents of the newly created Buffer are unknown and may contain sensitive data. Use buf.fill(0) to initialize such Buffer instances to zeroes.

A TypeError will be thrown if size is not a number.

Note that the Buffer module pre-allocates an internal Buffer instance of size Buffer.poolSize that is used as a pool for the fast allocation of new Buffer instances created using Buffer.allocUnsafe(size) (and the deprecated new Buffer(size) constructor) only when size is less than or equal to Buffer.poolSize >> 1 (floor of Buffer.poolSize divided by two). The default value of Buffer.poolSize is 8192 but can be modified.

Use of this pre-allocated internal memory pool is a key difference between calling Buffer.alloc(size, fill) vs. Buffer.allocUnsafe(size).fill(fill). Specifically, Buffer.alloc(size, fill) will never use the internal Buffer pool, while Buffer.allocUnsafe(size).fill(fill) will use the internal Buffer pool if size is less than or equal to half Buffer.poolSize. The difference is subtle but can be important when an application requires the additional performance that Buffer.allocUnsafe(size) provides.
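A small sketch of the two allocation paths, using the built-in Buffer. The resulting bytes are identical; only the allocation strategy differs:

```javascript
const a = Buffer.alloc(8, 0x61)            // never uses the internal pool
const b = Buffer.allocUnsafe(8).fill(0x61) // may be sliced from the pool

console.log(a.equals(b)) // true
```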

Class Method: Buffer.allocUnsafeSlow(size)

  • size {Number}

Allocates a new non-zero-filled and non-pooled Buffer of size bytes. The size must be less than or equal to the value of require('buffer').kMaxLength (on 64-bit architectures, kMaxLength is (2^31)-1). Otherwise, a RangeError is thrown. A zero-length Buffer will be created if a size less than or equal to 0 is specified.

The underlying memory for Buffer instances created in this way is not initialized. The contents of the newly created Buffer are unknown and may contain sensitive data. Use buf.fill(0) to initialize such Buffer instances to zeroes.

When using Buffer.allocUnsafe() to allocate new Buffer instances, allocations under 4KB are, by default, sliced from a single pre-allocated Buffer. This allows applications to avoid the garbage collection overhead of creating many individually allocated Buffers. This approach improves both performance and memory usage by eliminating the need to track and cleanup as many Persistent objects.

However, in the case where a developer may need to retain a small chunk of memory from a pool for an indeterminate amount of time, it may be appropriate to create an un-pooled Buffer instance using Buffer.allocUnsafeSlow() then copy out the relevant bits.

Use of Buffer.allocUnsafeSlow() should be used only as a last resort after a developer has observed undue memory retention in their applications.

A TypeError will be thrown if size is not a number.

All the Rest

The rest of the Buffer API is exactly the same as in node.js. See the docs.

Why is Buffer unsafe?

Today, the node.js Buffer constructor is overloaded to handle many different argument types like String, Array, Object, TypedArrayView (Uint8Array, etc.), ArrayBuffer, and also Number.

The API is optimized for convenience: you can throw any type at it, and it will try to do what you want.

Because the Buffer constructor is so powerful, you often see code like this:
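The kind of helper the document has in mind (a sketch; the hazard is the unchecked type of value):

```javascript
function toHex (value) {
  // value's type is never checked before reaching the Buffer constructor
  return new Buffer(value).toString('hex')
}

console.log(toHex('hello')) // 68656c6c6f
```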

But what happens if toHex is called with a Number argument?

Remote Memory Disclosure

If an attacker can make your program call the Buffer constructor with a Number argument, then they can make it allocate uninitialized memory from the node.js process. This could potentially disclose TLS private keys, user data, or database passwords.

When the Buffer constructor is passed a Number argument, it returns an UNINITIALIZED block of memory of the specified size. When you create a Buffer like this, you MUST overwrite the contents before returning it to the user.

From the node.js docs:

new Buffer(size)

  • size Number

The underlying memory for Buffer instances created in this way is not initialized. The contents of a newly created Buffer are unknown and could contain sensitive data. Use buf.fill(0) to initialize a Buffer to zeroes.

(Emphasis our own.)

When the programmer intends to create an uninitialized Buffer, you often see code like this:
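A sketch of that idiom (size is given a concrete value here so the snippet runs):

```javascript
var size = 16
var buf = new Buffer(size) // allocates uninitialized memory
buf.fill(0)                // explicitly zeroed before use

console.log(buf.toString('hex')) // 32 zeros
```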

Would this ever be a problem in real code?

Yes. It’s surprisingly common to forget to check the type of your variables in a dynamically-typed language like JavaScript.

Usually the consequence of assuming the wrong type is that your program crashes with an uncaught exception. But the failure mode for forgetting to check the type of arguments to the Buffer constructor is more catastrophic.

Here’s an example of a vulnerable service that takes a JSON payload and converts it to hex:
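A sketch of the vulnerable handler's core (the function name is hypothetical; the full service would wrap this in an http server):

```javascript
function toHexResponse (rawJson) {
  var body = JSON.parse(rawJson)
  // If body.str is a Number, new Buffer(body.str) allocates that many
  // uninitialized bytes instead of encoding a string.
  return new Buffer(body.str).toString('hex')
}

console.log(toHexResponse('{"str": "hello"}')) // 68656c6c6f
```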

In this example, an http client just has to send:
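Given the next sentence, the payload is a JSON body whose str field is the number 1000 rather than a string:

```json
{"str": 1000}
```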

and it will get back 1,000 bytes of uninitialized memory from the server.

This is a very serious bug. It’s similar in severity to the Heartbleed bug, which allowed remote attackers to disclose OpenSSL process memory.

Which real-world packages were vulnerable?

bittorrent-dht

Mathias Buus and I (Feross Aboukhadijeh) found this issue in one of our own packages, bittorrent-dht. The bug would allow anyone on the internet to send a series of messages to a user of bittorrent-dht and get them to reveal 20 bytes at a time of uninitialized memory from the node.js process.

Here’s the commit that fixed it. We released a new fixed version, created a Node Security Project disclosure, and deprecated all vulnerable versions on npm so users will get a warning to upgrade to a newer version.

ws

That got us wondering if there were other vulnerable packages. Sure enough, within a short period of time, we found the same issue in ws, the most popular WebSocket implementation in node.js.

If certain APIs were called with Number parameters instead of String or Buffer as expected, then uninitialized server memory would be disclosed to the remote peer.

These were the vulnerable methods:

Here’s a vulnerable socket server with some echo functionality:
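A hypothetical echo handler illustrating the type confusion (the handler shape is assumed; a stub socket stands in for a real ws connection):

```javascript
function handleMessage (socket, raw) {
  var message = JSON.parse(raw)
  if (message.type === 'echo') {
    socket.send(message.data) // dangerous: data may be a Number
  }
}

// With a stub socket, a numeric payload reaches send() unchecked:
var sent = []
handleMessage({ send: function (d) { sent.push(d) } }, '{"type": "echo", "data": 1000}')
console.log(typeof sent[0]) // number
```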

Calling socket.send(number) on the server will disclose server memory.

Here’s the release where the issue was fixed, with a more detailed explanation. Props to Arnout Kazemier for the quick fix. Here’s the Node Security Project disclosure.

What’s the solution?

It’s important that node.js offers a fast way to get memory; otherwise, performance-critical applications would needlessly get a lot slower.

But we need a better way to signal our intent as programmers. When we want uninitialized memory, we should request it explicitly.

Sensitive functionality should not be packed into a developer-friendly API that loosely accepts many different types. This type of API encourages the lazy practice of passing variables in without checking the type very carefully.

A new API: Buffer.allocUnsafe(number)

The functionality of creating buffers with uninitialized memory should be part of another API. We propose Buffer.allocUnsafe(number). This way, it’s not part of an API that frequently gets user input of all sorts of different types passed into it.

How do we fix node.js core?

We sent a PR to node.js core (merged as semver-major) which defends against one case:

In this situation, it’s implied that the programmer intended the first argument to be a string, since they passed an encoding as a second argument. Today, node.js will allocate uninitialized memory in the case of new Buffer(number, encoding), which is probably not what the programmer intended.

But this is only a partial solution, since if the programmer does new Buffer(variable) (without an encoding parameter) there’s no way to know what they intended. If variable is sometimes a number, then uninitialized memory will sometimes be returned.

What’s the real long-term fix?

We could deprecate and remove new Buffer(number) and use Buffer.allocUnsafe(number) when we need uninitialized memory. But that would break thousands of packages.

We believe the best solution is to:

1. Change new Buffer(number) to return safe, zeroed-out memory

2. Create a new API for creating uninitialized Buffers. We propose: Buffer.allocUnsafe(number)

Update

We now support adding three new APIs:

  • Buffer.from(value) - convert from any type to a buffer
  • Buffer.alloc(size) - create a zero-filled buffer
  • Buffer.allocUnsafe(size) - create an uninitialized buffer with given size
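The three APIs side by side (available on modern Node, and provided by safe-buffer on older versions):

```javascript
const b1 = Buffer.from('abc')    // convert a value to a buffer
const b2 = Buffer.alloc(4)       // zero-filled
const b3 = Buffer.allocUnsafe(4) // uninitialized: overwrite before exposing it

console.log(b1.toString('hex'), b2.toString('hex'), b3.length) // 616263 00000000 4
```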

This solves the core problem that affected ws and bittorrent-dht: Buffer(variable) being tricked into accepting a number argument.

This way, existing code continues working and the impact on the npm ecosystem will be minimal. Over time, npm maintainers can migrate performance-critical code to use Buffer.allocUnsafe(number) instead of new Buffer(number).

Conclusion

This wasn’t merely a theoretical exercise because we found the issue in some of the most popular npm packages.

Fortunately, there’s an easy fix that can be applied today. Use safe-buffer in place of buffer.

Eventually, we hope that node.js core can switch to this new, safer behavior. We believe the impact on the ecosystem would be minimal since it’s not a breaking change. Well-maintained, popular packages would be updated to use Buffer.alloc quickly, while older, insecure packages would magically become safe from this attack vector.

credit

The original issues in bittorrent-dht (disclosure) and ws (disclosure) were discovered by Mathias Buus and Feross Aboukhadijeh.

Thanks to Adam Baldwin for helping disclose these issues and for his work running the Node Security Project.

Thanks to John Hiesey for proofreading this README and auditing the code.



node-fetch

npm version build status coverage status install size Discord

A light-weight module that brings window.fetch to Node.js

(We are looking for v2 maintainers and collaborators)

Backers

Motivation

Instead of implementing XMLHttpRequest in Node.js to run a browser-specific Fetch polyfill, why not go from native http to the fetch API directly? Hence, node-fetch: minimal code for a window.fetch-compatible API on the Node.js runtime.

See Matt Andrews’ isomorphic-fetch or Leonardo Quixada’s cross-fetch for isomorphic usage (exports node-fetch for server-side, whatwg-fetch for client-side).

Features

  • Stay consistent with window.fetch API.
  • Make conscious trade-off when following WHATWG fetch spec and stream spec implementation details, document known differences.
  • Use native promise but allow substituting it with [insert your favorite promise library].
  • Use native Node streams for body on both request and response.
  • Decode content encoding (gzip/deflate) properly and convert string output (such as res.text() and res.json()) to UTF-8 automatically.
  • Useful extensions such as timeout, redirect limit, response size limit, explicit errors for troubleshooting.

Difference from client-side fetch

  • If you happen to use a missing feature that window.fetch offers, feel free to open an issue.
  • Pull requests are welcomed too!

Installation

Current stable release (2.x)

Loading and configuring the module

We suggest you load the module via require until the stabilization of ES modules in node:

If you are using a Promise library other than native, set it through fetch.Promise:
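A loading sketch. The snippet is guarded so it also runs on Node 18+, where a built-in fetch is available globally; the Promise substitution is a node-fetch-only feature:

```javascript
// In real code: const fetch = require('node-fetch')
let fetch
try {
  fetch = require('node-fetch')
} catch (e) {
  fetch = globalThis.fetch // Node 18+ built-in
}

// Optional, node-fetch only: substitute the promise implementation, e.g.
// fetch.Promise = require('bluebird')

console.log(typeof fetch) // function
```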

Common Usage

NOTE: The documentation below is up-to-date with 2.x releases; see the 1.x readme, changelog and 2.x upgrade guide for the differences.

Plain text or HTML

JSON
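The usual pattern, with the network call shown commented out (the URL is a placeholder) and the res.json() mechanics demonstrated against a locally constructed Response (a Node 18+ global with the same interface):

```javascript
// const res = await fetch('https://api.example.com/data')
// const json = await res.json()

async function demo () {
  const res = new Response('{"ok": true}', {
    headers: { 'content-type': 'application/json' }
  })
  return res.json()
}

demo().then(json => console.log(json.ok)) // true
```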

Simple Post

Post with JSON

Post with form parameters

URLSearchParams is available in Node.js as of v7.5.0. See official documentation for more usage methods.

NOTE: The Content-Type header is only set automatically to application/x-www-form-urlencoded when an instance of URLSearchParams is given as such:
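A sketch of the form-encoded POST (the network call is commented out; the URL is a placeholder):

```javascript
const params = new URLSearchParams()
params.append('a', '1')

// fetch('https://example.com/post', { method: 'POST', body: params })

console.log(params.toString()) // a=1
```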

Handling exceptions

NOTE: 3xx-5xx responses are NOT exceptions and should be handled in then(); see the next section for more information.

Adding a catch to the fetch promise chain will catch all exceptions, such as errors originating from node core libraries, network errors and operational errors, which are instances of FetchError. See the error handling document for more details.

Handling client and server errors

It is common to create a helper function to check that the response contains no client (4xx) or server (5xx) error responses:
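A minimal version of such a helper (the error class is kept to a plain Error here for brevity):

```javascript
function checkStatus (res) {
  if (res.ok) { // res.status >= 200 && res.status < 300
    return res
  }
  throw new Error(`HTTP error: ${res.status} ${res.statusText}`)
}

// Typical use: fetch(url).then(checkStatus).then(res => res.json())
console.log(checkStatus({ ok: true, status: 200, statusText: 'OK' }).status) // 200
```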

Advanced Usage

Streams

The “Node.js way” is to use streams when possible:

Buffer

If you prefer to cache binary data in full, use buffer(). (NOTE: buffer() is a node-fetch-only API)

Accessing Headers and other Meta data

Unlike browsers, you can access raw Set-Cookie headers manually using Headers.raw(). This is a node-fetch only API.

Post data using a file stream

Post with form-data (detect multipart)

Request cancellation with AbortSignal

NOTE: You may cancel streamed requests only on Node >= v8.0.0

You may cancel requests with AbortController. A suggested implementation is abort-controller.

An example of timing out a request after 150ms could be achieved as the following:
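A sketch of the 150 ms timeout (the fetch call is commented out because the URL is a placeholder; AbortController is global on Node 15+):

```javascript
const controller = new AbortController()
const timeout = setTimeout(() => controller.abort(), 150)

// fetch('https://example.com', { signal: controller.signal })
//   .catch(err => {
//     if (err.name === 'AbortError') console.log('request was aborted')
//   })
//   .finally(() => clearTimeout(timeout))

clearTimeout(timeout) // tidy up in this standalone sketch
console.log(controller.signal.aborted) // false
```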

See test cases for more examples.

API

fetch(url[, options])

  • url A string representing the URL for fetching
  • options Options for the HTTP(S) request
  • Returns: Promise<Response>

Perform an HTTP(S) fetch.

url should be an absolute url, such as https://example.com/. A path-relative URL (/file/under/root) or protocol-relative URL (//can-be-http-or-https.com/) will result in a rejected Promise.

Options

The default values are shown after each option key.

Default Headers

If no values are set, the following request headers will be sent automatically:

Header              Value
------              -----
Accept-Encoding     gzip,deflate (when options.compress === true)
Accept              */*
Connection          close (when no options.agent is present)
Content-Length      (automatically calculated, if possible)
Transfer-Encoding   chunked (when req.body is a stream)
User-Agent          node-fetch/1.0 (+https://github.com/bitinn/node-fetch)

Note: when body is a Stream, Content-Length is not set automatically.

Custom Agent

The agent option allows you to specify networking-related options which are out of the scope of Fetch, including but not limited to the following:

  • Use only IPv4 or IPv6
  • Custom DNS Lookup

See http.Agent for more information.

In addition, the agent option accepts a function that returns an http(s).Agent instance given the current URL. This is useful during a redirection chain that crosses the HTTP and HTTPS protocols.

Class: Request

An HTTP(S) request containing information about URL, method, headers, and the body. This class implements the Body interface.

Due to the nature of Node.js, the following properties are not implemented at this moment:

  • type
  • destination
  • referrer
  • referrerPolicy
  • mode
  • credentials
  • cache
  • integrity
  • keepalive

The following node-fetch extension properties are provided:

  • follow
  • compress
  • counter
  • agent

See options for exact meaning of these extensions.

new Request(input[, options])

(spec-compliant)

  • input A string representing a URL, or another Request (which will be cloned)
  • options Options for the HTTP(S) request

Constructs a new Request object. The constructor is identical to that in the browser.

In most cases, calling fetch(url, options) directly is simpler than constructing a Request object.

Class: Response

An HTTP(S) response. This class implements the Body interface.

The following properties are not implemented in node-fetch at this moment:

  • Response.error()
  • Response.redirect()
  • type
  • trailer

new Response([body[, options]])

(spec-compliant)

Constructs a new Response object. The constructor is identical to that in the browser.

Because Node.js does not implement service workers (for which this class was designed), one rarely has to construct a Response directly.

response.ok

(spec-compliant)

Convenience property representing whether the request ended normally. Will evaluate to true if the response status was greater than or equal to 200 but smaller than 300.

response.redirected

(spec-compliant)

Convenience property representing whether the request has been redirected at least once. Will evaluate to true if the internal redirect counter is greater than 0.

Class: Headers

This class allows manipulating and iterating over a set of HTTP headers. All methods specified in the Fetch Standard are implemented.

new Headers([init])

(spec-compliant)

  • init Optional argument to pre-fill the Headers object

Constructs a new Headers object. init can be either null, a Headers object, a key-value map object, or any iterable object.
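The accepted init shapes (Headers is also a global on Node 18+, with the same interface as node-fetch's):

```javascript
const h1 = new Headers({ 'Content-Type': 'text/plain' }) // key-value map
const h2 = new Headers([['Accept', 'application/json']]) // iterable of pairs
const h3 = new Headers(h1)                               // another Headers object

console.log(h3.get('content-type')) // text/plain (lookup is case-insensitive)
```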

Interface: Body

Body is an abstract interface with methods that are applicable to both Request and Response classes.

The following methods are not yet implemented in node-fetch at this moment:

  • formData()

body.body

(deviation from spec)

Data are encapsulated in the Body object. Note that while the Fetch Standard requires the property to always be a WHATWG ReadableStream, in node-fetch it is a Node.js Readable stream.

body.bodyUsed

(spec-compliant)

  • Boolean

A boolean property indicating whether this body has been consumed. Per the spec, a consumed body cannot be used again.

body.arrayBuffer()

body.blob()

body.json()

body.text()

(spec-compliant)

  • Returns: Promise

Consume the body and return a promise that will resolve to one of these formats.

body.buffer()

(node-fetch extension)

  • Returns: Promise<Buffer>

Consume the body and return a promise that will resolve to a Buffer.

body.textConverted()

(node-fetch extension)

  • Returns: Promise<String>

Identical to body.text(), except instead of always converting to UTF-8, encoding sniffing will be performed and text converted to UTF-8 if possible.

(This API requires an optional dependency of the npm package encoding, which you need to install manually. webpack users may see a warning message due to this optional dependency.)

Class: FetchError

(node-fetch extension)

An operational error in the fetching process. See ERROR-HANDLING.md for more info.

Class: AbortError

(node-fetch extension)

An Error thrown when the request is aborted in response to an AbortSignal’s abort event. It has a name property of AbortError. See ERROR-HANDLING.md for more info.

Acknowledgement

Thanks to github/fetch for providing a solid implementation reference.

node-fetch v1 was maintained by [@bitinn](https://github.com/bitinn); v2 was maintained by [@TimothyGu](https://github.com/timothygu), [@bitinn](https://github.com/bitinn) and [@jimmywarting](https://github.com/jimmywarting); v2 readme is written by [@jkantr](https://github.com/jkantr).



verror: rich JavaScript errors

This module provides several classes in support of Joyent’s Best Practices for Error Handling in Node.js. If you find any of the behavior here confusing or surprising, check out that document first.

The error classes here support:

  • printf-style arguments for the message
  • chains of causes
  • properties to provide extra information about the error
  • creating your own subclasses that support all of these

The classes here are:

  • VError, for chaining errors while preserving each one’s error message. This is useful in servers and command-line utilities when you want to propagate an error up a call stack, but allow various levels to add their own context. See examples below.
  • WError, for wrapping errors while hiding the lower-level messages from the top-level error. This is useful for API endpoints where you don’t want to expose internal error messages, but you still want to preserve the error chain for logging and debugging.
  • SError, which is just like VError but interprets printf-style arguments more strictly.
  • MultiError, which is just an Error that encapsulates one or more other errors. (This is used for parallel operations that return several errors.)


Quick start

First, install the package:

npm install verror

If nothing else, you can use VError as a drop-in replacement for the built-in JavaScript Error class, with the addition of printf-style messages:
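A quick-start sketch matching the output shown below (requires `npm install verror`; guarded so the snippet also runs where verror isn't installed):

```javascript
let VError = null
try { VError = require('verror') } catch (e) {}

if (VError) {
  const filename = '/etc/passwd'
  const err = new VError('missing file: "%s"', filename)
  console.log(err.message) // missing file: "/etc/passwd"
}
```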

This prints:

missing file: "/etc/passwd"

You can also pass a cause argument, which is any other Error object:

This prints out:

stat "/nonexistent": ENOENT, stat '/nonexistent'

which resembles how Unix programs typically report errors:

$ sort /nonexistent
sort: open failed: /nonexistent: No such file or directory

To match the Unixy feel, when you print out the error, just prepend the program’s name to the VError’s message. Or just call node-cmdutil.fail(your_verror), which does this for you.

You can get the next-level Error using err.cause():
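A sketch covering both the cause argument and err.cause() (guarded; the inner error is constructed by hand here rather than by a real fs.stat call):

```javascript
let VError = null
try { VError = require('verror') } catch (e) {}

if (VError) {
  const err1 = new Error("ENOENT, stat '/nonexistent'")
  const err2 = new VError(err1, 'stat "%s"', '/nonexistent')
  console.log(err2.message)         // stat "/nonexistent": ENOENT, stat '/nonexistent'
  console.log(err2.cause().message) // ENOENT, stat '/nonexistent'
}
```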

prints:

ENOENT, stat '/nonexistent'

Of course, you can chain these as many times as you want, and it works with any kind of Error:
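A chain of three levels, producing the output shown below (guarded; requires verror):

```javascript
let VError = null
try { VError = require('verror') } catch (e) {}

if (VError) {
  const err1 = new Error('No such file or directory')
  const err2 = new VError(err1, 'failed to stat "%s"', '/junk')
  const err3 = new VError(err2, 'request failed')
  console.log(err3.message)
  // request failed: failed to stat "/junk": No such file or directory
}
```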

This prints:

request failed: failed to stat "/junk": No such file or directory

The idea is that each layer in the stack annotates the error with a description of what it was doing. The end result is a message that explains what happened at each level.

You can also decorate Error objects with additional information so that callers can not only handle each kind of error differently, but also construct their own error messages (e.g., to localize them, format them, group them by type, and so on). See the example below.



Deeper dive

The two main goals for VError are:

  • Make it easy to construct clear, complete error messages intended for people. Clear error messages greatly improve both user experience and debuggability, so we wanted to make it easy to build them. That’s why the constructor takes printf-style arguments.
  • Make it easy to construct objects with programmatically-accessible metadata (which we call informational properties). Instead of just saying “connection refused while connecting to 192.168.1.2:80”, you can add properties like "ip": "192.168.1.2" and "tcpPort": 80. This can be used for feeding into monitoring systems, analyzing large numbers of Errors (as from a log file), or localizing error messages.

To really make this useful, it also needs to be easy to compose Errors: higher-level code should be able to augment the Errors reported by lower-level code to provide a more complete description of what happened. Instead of saying “connection refused”, you can say “operation X failed: connection refused”. That’s why VError supports causes.

In order for all this to work, programmers need to know that it’s generally safe to wrap lower-level Errors with higher-level ones. If you have existing code that handles Errors produced by a library, you should be able to wrap those Errors with a VError to add information without breaking the error handling code. There are two obvious ways that this could break such consumers:

  • The error’s name might change. People typically use name to determine what kind of Error they’ve got. To ensure compatibility, you can create VErrors with custom names, but this approach isn’t great because it prevents you from representing complex failures. For this reason, VError provides findCauseByName, which essentially asks: does this Error or any of its causes have this specific type? If error handling code uses findCauseByName, then subsystems can construct very specific causal chains for debuggability and still let people handle simple cases easily. There’s an example below.
  • The error’s properties might change. People often hang additional properties off of Error objects. If we wrap an existing Error in a new Error, those properties would be lost unless we copied them. But there are a variety of both standard and non-standard Error properties that should not be copied in this way: most obviously name, message, and stack, but also fileName, lineNumber, and a few others. Plus, it’s useful for some Error subclasses to have their own private properties – and there’d be no way to know whether these should be copied. For these reasons, VError first-classes these information properties. You have to provide them in the constructor, you can only fetch them with the info() function, and VError takes care of making sure properties from causes wind up in the info() output.
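A sketch of both mechanisms (guarded; the error name and informational properties are examples drawn from the discussion above):

```javascript
let VError = null
try { VError = require('verror') } catch (e) {}

if (VError) {
  const inner = new VError(
    { name: 'ConnectionError', info: { ip: '192.168.1.2', tcpPort: 80 } },
    'connection refused')
  const outer = new VError(inner, 'operation X failed')

  // Ask whether this error or any of its causes has a given name:
  console.log(VError.findCauseByName(outer, 'ConnectionError') === inner) // true
  // Informational properties from the whole chain are collapsed:
  console.log(VError.info(outer).ip) // 192.168.1.2
}
```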

Let’s put this all together with an example from the node-fast RPC library. node-fast implements a simple RPC protocol for Node programs. There’s a server and client interface, and clients make RPC requests to servers. Let’s say the server fails with an UnauthorizedError with message “user ‘bob’ is not authorized”. The client wraps all server errors with a FastServerError. The client also wraps all request errors with a FastRequestError that includes the name of the RPC call being made. The result of this failed RPC might look like this:

name: FastRequestError
message: "request failed: server error: user 'bob' is not authorized"
rpcMsgid:
rpcMethod: GetObject
cause:
    name: FastServerError
    message: "server error: user 'bob' is not authorized"
    cause:
        name: UnauthorizedError
        message: "user 'bob' is not authorized"
        rpcUser: "bob"

When the caller uses VError.info(), the information properties are collapsed so that it looks like this:

message: "request failed: server error: user 'bob' is not authorized"
rpcMsgid:
rpcMethod: GetObject
rpcUser: "bob"

Taking this apart:

  • The error’s message is a complete description of the problem. The caller can report this directly to its caller, which can potentially make its way back to an end user (if appropriate). It can also be logged.
  • The caller can tell that the request failed on the server, rather than as a result of a client problem (e.g., failure to serialize the request), a transport problem (e.g., failure to connect to the server), or something else (e.g., a timeout). They do this using findCauseByName('FastServerError') rather than checking the name field directly.
  • If the caller logs this error, the logs can be analyzed to aggregate errors by cause, by RPC method name, by user, or whatever. Or the error can be correlated with other events for the same rpcMsgid.
  • It wasn’t very hard for any part of the code to contribute to this Error. Each part of the stack has just a few lines to provide exactly what it knows, with very little boilerplate.

It’s not expected that you’d use these complex forms all the time. Despite supporting the complex case above, you can still just do:

new VError("my service isn't working");

for the simple cases.



Reference: VError, WError, SError

VError, WError, and SError are convenient drop-in replacements for Error that support printf-style arguments, first-class causes, informational properties, and other useful features.

Constructors

The VError constructor has several forms:
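The three documented forms side by side (guarded; the error name and info properties are examples):

```javascript
let VError = null
try { VError = require('verror') } catch (e) {}

if (VError) {
  // First form: an options object, then printf-style arguments
  const a = new VError({ name: 'MyError', info: { port: 80 } }, 'bad port: %d', 80)
  // Second form: a cause Error, then printf-style arguments
  const b = new VError(a, 'request failed')
  // Third form: printf-style arguments only
  const c = new VError('simple message')

  console.log(b.message) // request failed: bad port: 80
  console.log(c.message) // simple message
}
```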

All of these forms construct a new VError that behaves just like the built-in JavaScript Error class, with some additional methods described below.

In the first form, options is a plain object with any of the following optional properties:

  • name (string): Describes what kind of error this is. This is intended for programmatic use to distinguish between different kinds of errors. Note that in modern versions of Node.js, this name is ignored in the stack property value, but callers can still use the name property to get at it.
  • cause (any Error object): Indicates that the new error was caused by cause. See cause() below. If unspecified, the cause will be null.
  • strict (boolean): If true, then null and undefined values in sprintf_args are passed through to sprintf(). Otherwise, these are replaced with the strings 'null' and 'undefined', respectively.
  • constructorOpt (function): If specified, then the stack trace for this error ends at function constructorOpt. Functions called by constructorOpt will not show up in the stack. This is useful when this class is subclassed.
  • info (object): Specifies arbitrary informational properties that are available through the VError.info(err) static class method. See that method for details.

The second form is equivalent to using the first form with the specified cause as the error’s cause. This form is distinguished from the first form because the first argument is an Error.

The third form is equivalent to using the first form with all default option values. This form is distinguished from the other forms because the first argument is not an object or an Error.
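As the two paragraphs above describe, the forms are distinguished purely by the type of the first argument. A sketch of that dispatch (assumed logic for illustration, not verror's actual source) looks like:

```javascript
// Tell the three constructor forms apart by inspecting the first argument.
function splitCtorArgs(args) {
  if (args[0] instanceof Error) {
    // second form: new VError(cause, fmt, ...sprintf_args)
    return { options: { cause: args[0] }, sprintfArgs: args.slice(1) };
  }
  if (typeof args[0] === 'object' && args[0] !== null) {
    // first form: new VError(options, fmt, ...sprintf_args)
    return { options: args[0], sprintfArgs: args.slice(1) };
  }
  // third form: new VError(fmt, ...sprintf_args) -- all default options
  return { options: {}, sprintfArgs: args };
}
```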

The WError constructor is used exactly the same way as the VError constructor. The SError constructor is also used the same way as the VError constructor except that in all cases, the strict property is overridden to true.

Public properties

VError, WError, and SError all provide the same public properties as JavaScript’s built-in Error objects.

  • name (string): Programmatically-usable name of the error.
  • message (string): Human-readable summary of the failure. Programmatically-accessible details are provided through the VError.info(err) class method.
  • stack (string): Human-readable stack trace where the Error was constructed.

For all of these classes, the printf-style arguments passed to the constructor are processed with sprintf() to form a message. For WError, this becomes the complete message property. For SError and VError, this message is prepended to the message of the cause, if any (with a suitable separator), and the result becomes the message property.

The stack property is managed entirely by the underlying JavaScript implementation. It’s generally implemented using a getter function because constructing the human-readable stack trace is somewhat expensive.

Class methods

The following methods are defined on the VError class and as exported functions on the verror module. They’re defined this way rather than using methods on VError instances so that they can be used on Errors not created with VError.

VError.cause(err)

The cause() function returns the next Error in the cause chain for err, or null if there is no next error. See the cause argument to the constructor. Errors can have arbitrarily long cause chains. You can walk the cause chain by invoking VError.cause(err) on each subsequent return value. If err is not a VError, the cause is null.
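The walk described above can be sketched as follows (plain JavaScript, not the library; cause() here reads a sketchCause property invented for this sketch):

```javascript
// Stand-in for VError.cause(): next error in the chain, or null.
function cause(err) {
  return err && err.sketchCause ? err.sketchCause : null;
}

// Walk the whole chain by calling cause() on each return value.
function causeNames(err) {
  const names = [];
  for (let e = err; e !== null; e = cause(e)) names.push(e.name);
  return names;
}

const inner = new Error('disk full');
inner.name = 'DiskError';
const outer = new Error('write failed');
outer.sketchCause = inner;
console.log(causeNames(outer)); // [ 'Error', 'DiskError' ]
```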

VError.info(err)

Returns an object with all of the extra error information that’s been associated with this Error and all of its causes. These are the properties passed in using the info option to the constructor. Properties not specified in the constructor for this Error are implicitly inherited from this error’s cause.

These properties are intended to provide programmatically-accessible metadata about the error. For an error that indicates a failure to resolve a DNS name, informational properties might include the DNS name to be resolved, or even the list of resolvers used to resolve it. The values of these properties should generally be plain objects (i.e., consisting only of null, undefined, numbers, booleans, strings, and objects and arrays containing only other plain objects).
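The inheritance rule described above (properties nearer the top of the chain override same-named properties from causes) can be sketched as a recursive merge. The sketchCause and sketchInfo property names are invented for this sketch; they are not verror internals:

```javascript
// Sketch of the described merge: recurse to the bottom of the cause chain,
// then let each error's own info override what it inherited.
function infoSketch(err) {
  const inherited = err.sketchCause ? infoSketch(err.sketchCause) : {};
  return Object.assign(inherited, err.sketchInfo || {});
}

const connErr = new Error('something bad happened');
connErr.sketchInfo = { errno: 'ECONNREFUSED', remote_ip: '127.0.0.1', port: 215 };
const reqErr = new Error('request failed');
reqErr.sketchCause = connErr;
reqErr.sketchInfo = { errno: 'EBADREQUEST' };
console.log(infoSketch(reqErr));
// { errno: 'EBADREQUEST', remote_ip: '127.0.0.1', port: 215 }
```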

VError.fullStack(err)

Returns a string containing the full stack trace, with all nested errors recursively reported as 'caused by:' + err.stack.
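The recursion this describes can be sketched like so (again using a sketch-only sketchCause slot rather than verror's real internals):

```javascript
// Sketch of fullStack(): this error's stack, then each cause's stack,
// each prefixed with 'caused by: '.
function fullStackSketch(err) {
  const next = err.sketchCause; // hypothetical cause slot for this sketch
  return next ? err.stack + '\ncaused by: ' + fullStackSketch(next) : err.stack;
}
```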

VError.findCauseByName(err, name)

The findCauseByName() function traverses the cause chain for err, looking for an error whose name property matches the passed in name value. If no match is found, null is returned.

If all you want is to know whether there’s a cause (and you don’t care what it is), you can use VError.hasCauseWithName(err, name).

If a vanilla error or a non-VError error is passed in, then there is no cause chain to traverse. In this scenario, the function will check the name property of only err.
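A sketch of that traversal, including the plain-Error case just mentioned (err itself is checked, so errors with no cause chain still work):

```javascript
// Sketch of findCauseByName(): check err and every cause in turn;
// sketchCause is a hypothetical property used only by this sketch.
function findCauseByNameSketch(err, name) {
  for (let e = err; e; e = e.sketchCause || null) {
    if (e.name === name) return e;
  }
  return null;
}
```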

VError.hasCauseWithName(err, name)

Returns true if and only if VError.findCauseByName(err, name) would return a non-null value. This essentially determines whether err has any cause in its cause chain that has name name.

VError.errorFromList(errors)

Given an array of Error objects (possibly empty), return a single error representing the whole collection of errors. If the list has:

  • 0 elements, returns null
  • 1 element, returns the sole error
  • more than 1 element, returns a MultiError referencing the whole list

This is useful for cases where an operation may produce any number of errors, and you ultimately want to implement the usual callback(err) pattern. You can accumulate the errors in an array and then invoke callback(VError.errorFromList(errors)) when the operation is complete.
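The three documented cases can be sketched as follows (plain JavaScript, not verror; a MultiError is stood in for by an aggregate Error carrying the list on a hypothetical sketchErrors property):

```javascript
// Sketch of errorFromList(): null for an empty list, the sole error for a
// one-element list, and an aggregate error otherwise.
function errorFromListSketch(errors) {
  if (errors.length === 0) return null;
  if (errors.length === 1) return errors[0];
  const multi = new Error(
    'first of ' + errors.length + ' errors: ' + errors[0].message);
  multi.sketchErrors = errors; // hypothetical slot, not verror's internal
  return multi;
}

console.log(errorFromListSketch([]));                          // null
console.log(errorFromListSketch([new Error('only')]).message); // 'only'
console.log(errorFromListSketch([new Error('a'), new Error('b')]).message);
// 'first of 2 errors: a'
```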

VError.errorForEach(err, func)

Convenience function for iterating an error that may itself be a MultiError.

In all cases, err must be an Error. If err is a MultiError, then func is invoked as func(errorN) for each of the underlying errors of the MultiError. If err is any other kind of error, func is invoked once as func(err). In all cases, func is invoked synchronously.

This is useful for cases where an operation may produce any number of warnings that may be encapsulated with a MultiError – but may not be.

This function does not iterate an error’s cause chain.
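The dispatch described above can be sketched like this (a MultiError-like aggregate is recognized here by a hypothetical sketchErrors array, matching the errorFromList sketch style rather than verror's real internals):

```javascript
// Sketch of errorForEach(): fan out over an aggregate's underlying errors,
// otherwise invoke func once, synchronously, with the error itself.
function errorForEachSketch(err, func) {
  if (Array.isArray(err.sketchErrors)) {
    err.sketchErrors.forEach(func);
  } else {
    func(err);
  }
}

const single = new Error('one warning');
const aggregate = new Error('first of 2 errors: one warning');
aggregate.sketchErrors = [single, new Error('another warning')];

let calls = 0;
errorForEachSketch(single, () => { calls += 1; });    // invoked once
errorForEachSketch(aggregate, () => { calls += 1; }); // invoked twice
console.log(calls); // 3
```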

Examples

The “Demo” section above covers several basic cases. Here’s a more advanced case:

This outputs:

failed to connect to "127.0.0.1:215": something bad happened
ConnectionError
{ errno: 'ECONNREFUSED', remote_ip: '127.0.0.1', port: 215 }
ConnectionError: failed to connect to "127.0.0.1:215": something bad happened
    at Object.<anonymous> (/home/dap/node-verror/examples/info.js:5:12)
    at Module._compile (module.js:456:26)
    at Object.Module._extensions..js (module.js:474:10)
    at Module.load (module.js:356:32)
    at Function.Module._load (module.js:312:12)
    at Function.Module.runMain (module.js:497:10)
    at startup (node.js:119:16)
    at node.js:935:3

Information properties are inherited up the cause chain, with values at the top of the chain overriding same-named values lower in the chain. To continue that example:

This outputs:

request failed: failed to connect to "127.0.0.1:215": something bad happened
RequestError
{ errno: 'EBADREQUEST', remote_ip: '127.0.0.1', port: 215 }
RequestError: request failed: failed to connect to "127.0.0.1:215": something bad happened
    at Object.<anonymous> (/home/dap/node-verror/examples/info.js:20:12)
    at Module._compile (module.js:456:26)
    at Object.Module._extensions..js (module.js:474:10)
    at Module.load (module.js:356:32)
    at Function.Module._load (module.js:312:12)
    at Function.Module.runMain (module.js:497:10)
    at startup (node.js:119:16)
    at node.js:935:3

You can also print the complete stack trace of combined Errors by using VError.fullStack(err).

This outputs:

VError: something really bad happened here: something bad happened
    at Object.<anonymous> (/home/dap/node-verror/examples/fullStack.js:5:12)
    at Module._compile (module.js:409:26)
    at Object.Module._extensions..js (module.js:416:10)
    at Module.load (module.js:343:32)
    at Function.Module._load (module.js:300:12)
    at Function.Module.runMain (module.js:441:10)
    at startup (node.js:139:18)
    at node.js:968:3
caused by: VError: something bad happened
    at Object.<anonymous> (/home/dap/node-verror/examples/fullStack.js:3:12)
    at Module._compile (module.js:409:26)
    at Object.Module._extensions..js (module.js:416:10)
    at Module.load (module.js:343:32)
    at Function.Module._load (module.js:300:12)
    at Function.Module.runMain (module.js:441:10)
    at startup (node.js:139:18)
    at node.js:968:3

VError.fullStack is also safe to use on regular Errors, so feel free to use it whenever you need to extract the stack trace from an Error, regardless of whether it’s a VError or not.



Reference: MultiError

MultiError is an Error class that represents a group of Errors. This is used when you logically need to provide a single Error, but you want to preserve information about multiple underlying Errors. A common case is when you execute several operations in parallel and some of them fail.

MultiErrors are constructed as:

error_list is an array of at least one Error object.

The cause of the MultiError is the first error provided. None of the other VError options are supported. The message for a MultiError consists of the message from the first error, prepended with a message indicating that there were other errors.

For example:

outputs:

first of 2 errors: failed to resolve DNS name "abc.example.com"

See the convenience function VError.errorFromList, which is sometimes simpler to use than this constructor.

Public methods

errors()

Returns an array of the errors used to construct this MultiError.
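Putting the described behavior together, a stand-in class could look like this (a sketch only, not the real MultiError; the message format is assumed from the example output above):

```javascript
// Sketch of a MultiError-like class: the message comes from the first
// error, prefixed with a count, and errors() returns the original list.
class MultiErrorSketch extends Error {
  constructor(errorList) {
    if (!Array.isArray(errorList) || errorList.length === 0) {
      throw new Error('errorList must contain at least one Error');
    }
    super('first of ' + errorList.length + ' error' +
      (errorList.length === 1 ? '' : 's') + ': ' + errorList[0].message);
    this.name = 'MultiError';
    this._errors = errorList;
  }

  errors() {
    return this._errors.slice();
  }
}

const m = new MultiErrorSketch([
  new Error('failed to resolve DNS name "abc.example.com"'),
  new Error('timed out'),
]);
console.log(m.message);
// first of 2 errors: failed to resolve DNS name "abc.example.com"
```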



Contributing

See separate contribution guidelines.



braces Donate NPM version NPM monthly downloads NPM total downloads Linux Build Status

Bash-like brace expansion, implemented in JavaScript. Safer than other brace expansion libs, with complete support for the Bash 4.3 braces specification, without sacrificing speed.

Please consider following this project’s author, Jon Schlinkert, and consider starring the project to show your :heart: and support.

Install

Install with npm:

v3.0.0 Released!!

See the changelog for details.

Why use braces?

Brace patterns make globs more powerful by adding the ability to match specific ranges and sequences of characters.

  • fast and performant - Starts fast, runs fast and scales well as patterns increase in complexity.
  • Organized code base - The parser and compiler are easy to maintain and update when edge cases crop up.
  • Well-tested - Thousands of test assertions, and passes all of the Bash, minimatch, and brace-expansion unit tests (as of the date this was written).
  • Safer - You shouldn’t have to worry about users defining aggressive or malicious brace patterns that can break your application. Braces takes measures to prevent malicious regex that can be used for DDoS attacks (see catastrophic backtracking).

Usage

The main export is a function that takes one or more brace patterns and options.

Brace Expansion vs. Compilation

By default, brace patterns are compiled into strings that are optimized for creating regular expressions and matching.

Compiled

Expanded

Enable brace expansion by setting the expand option to true, or by using braces.expand() (returns an array similar to what you’d expect from Bash, or echo {1..5}, or minimatch):

Lists

Expand lists (like Bash “sets”):

Sequences

Expand ranges of characters (like Bash “sequences”):

See fill-range for all available range-expansion options.

Stepped ranges

Steps, or increments, may be used with ranges:

When the .optimize method is used, or options.optimize is set to true, sequences are passed to to-regex-range for expansion.

Nesting

Brace patterns may be nested. The results of each expanded string are not sorted, and left to right order is preserved.

“Expanded” braces

“Optimized” braces

Escaping

Escaping braces

A brace pattern will not be expanded or evaluated if either the opening or closing brace is escaped:

Escaping commas

Commas inside braces may also be escaped:

Single items

Following bash conventions, a brace pattern is also not expanded when it contains a single character:

Options

options.maxLength

Type: Number

Default: 65,536

Description: Limit the length of the input string. Useful when the input string is generated or your application allows users to pass a string, et cetera.

options.expand

Type: Boolean

Default: undefined

Description: Generate an “expanded” brace pattern (alternatively you can use the braces.expand() method, which does the same thing).

options.nodupes

Type: Boolean

Default: undefined

Description: Remove duplicates from the returned array.

options.rangeLimit

Type: Number

Default: 1000

Description: To prevent malicious patterns from being passed by users, an error is thrown when braces.expand() is used or options.expand is true and the generated range will exceed the rangeLimit.

You can customize options.rangeLimit or set it to Infinity to disable this altogether.

Examples

options.transform

Type: Function

Default: undefined

Description: Customize range expansion.

Example: Transforming non-numeric values

Example: Transforming numeric values

options.quantifiers

Type: Boolean

Default: undefined

Description: In regular expressions, quantifiers can be used to specify how many times a token can be repeated. For example, a{1,3} will match the letter a one to three times.

Unfortunately, regex quantifiers happen to share the same syntax as Bash lists.

The quantifiers option tells braces to detect when regex quantifiers are defined in the given pattern, and not to try to expand them as lists.

Examples

options.unescape

Type: Boolean

Default: undefined

Description: Strip backslashes that were used for escaping from the result.

What is “brace expansion”?

Brace expansion is a type of parameter expansion that was made popular by unix shells for generating lists of strings, as well as regex-like matching when used alongside wildcards (globs).

In addition to “expansion”, braces are also used for matching. In other words:

More about brace expansion (click to expand)

There are two main types of brace expansion:

  1. lists: which are defined using comma-separated values inside curly braces: {a,b,c}
  2. sequences: which are defined using a starting value and an ending value, separated by two dots: a{1..3}b. Optionally, a third argument may be passed to define a “step” or increment to use: a{1..100..10}b. These are also sometimes referred to as “ranges”.
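The two forms above can be sketched with a toy expander (plain JavaScript, not the braces library; it handles only flat, unescaped lists and integer sequences, with no step support):

```javascript
// Toy brace expander: find the first {…} group, expand it as a comma list
// or an integer a..b sequence, then recurse for any remaining groups.
function toyExpand(pattern) {
  const m = pattern.match(/\{([^{}]+)\}/);
  if (!m) return [pattern];
  const [whole, body] = m;
  let parts;
  const range = body.match(/^(-?\d+)\.\.(-?\d+)$/);
  if (range) {
    const start = Number(range[1]);
    const end = Number(range[2]);
    const step = start <= end ? 1 : -1;
    parts = [];
    for (let i = start; step > 0 ? i <= end : i >= end; i += step) {
      parts.push(String(i));
    }
  } else {
    parts = body.split(',');
  }
  return parts.flatMap(p => toyExpand(pattern.replace(whole, p)));
}

console.log(toyExpand('{a,b,c}{1..2}'));
// [ 'a1', 'a2', 'b1', 'b2', 'c1', 'c2' ]
```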

Here are some example brace patterns to illustrate how they work:

Sets

{a,b,c}       => a b c
{a,b,c}{1,2}  => a1 a2 b1 b2 c1 c2

Sequences

{1..9}        => 1 2 3 4 5 6 7 8 9
{4..-4}       => 4 3 2 1 0 -1 -2 -3 -4
{1..20..3}    => 1 4 7 10 13 16 19
{a..j}        => a b c d e f g h i j
{j..a}        => j i h g f e d c b a
{a..z..3}     => a d g j m p s v y

Combination

Sets and sequences can be mixed together or used along with any other strings.

{a,b,c}{1..3}   => a1 a2 a3 b1 b2 b3 c1 c2 c3
foo/{a,b,c}/bar => foo/a/bar foo/b/bar foo/c/bar

The fact that braces can be “expanded” from relatively simple patterns makes them ideal for quickly generating test fixtures, file paths, and similar use cases.

Brace matching

In addition to expansion, brace patterns are also useful for performing regular-expression-like matching.

For example, the pattern foo/{1..3}/bar would match any of following strings:

foo/1/bar
foo/2/bar
foo/3/bar

But not:

baz/1/qux
baz/2/qux
baz/3/qux

Braces can also be combined with glob patterns to perform more advanced wildcard matching. For example, the pattern */{1..3}/* would match any of following strings:

foo/1/bar
foo/2/bar
foo/3/bar
baz/1/qux
baz/2/qux
baz/3/qux

Brace matching pitfalls

Although brace patterns offer a user-friendly way of matching ranges or sets of strings, there are also some major disadvantages and potential risks you should be aware of.

tldr

“brace bombs”

  • brace expansion can eat up a huge amount of processing resources
  • as brace patterns increase linearly in size, the system resources required to expand the pattern increase exponentially
  • users can accidentally (or intentionally) exhaust your system’s resources resulting in the equivalent of a DoS attack (bonus: no programming knowledge is required!)

For a more detailed explanation with examples, see the geometric complexity section.

The solution

Jump to the performance section to see how Braces solves this problem in comparison to other libraries.

Geometric complexity

At minimum, brace patterns with sets limited to two elements have quadratic or O(n^2) complexity. But the complexity of the algorithm increases exponentially as the number of sets, and elements per set, increases, which is O(n^c).

For example, the following sets demonstrate quadratic (O(n^2)) complexity:

{1,2}{3,4}      => (2X2)    => 13 14 23 24
{1,2}{3,4}{5,6} => (2X2X2)  => 135 136 145 146 235 236 245 246

But add an element to a set, and we get a n-fold Cartesian product with O(n^c) complexity:

{1,2,3}{4,5,6}{7,8,9} => (3X3X3) => 147 148 149 157 158 159 167 168 169 247 248 
                                    249 257 258 259 267 268 269 347 348 349 357 
                                    358 359 367 368 369

Now, imagine how this complexity grows given that each element is a n-tuple:

{1..100}{1..100}         => (100X100)     => 10,000 elements (38.4 kB)
{1..100}{1..100}{1..100} => (100X100X100) => 1,000,000 elements (5.76 MB)

Although these examples are clearly contrived, they demonstrate how brace patterns can quickly grow out of control.
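The growth above is just a Cartesian product of the sets: the result count is the product of the set sizes, so output size explodes while the pattern grows only linearly. A small demonstration:

```javascript
// Generic Cartesian product over already-expanded sets, concatenating
// one element from each set into every result string.
function product(sets) {
  return sets.reduce(
    (acc, set) => acc.flatMap(prefix => set.map(item => prefix + item)),
    ['']);
}

console.log(product([['1', '2'], ['3', '4']]));
// [ '13', '14', '23', '24' ]  -- 2 x 2 = 4 results
console.log(product([['1', '2', '3'], ['4', '5', '6'], ['7', '8', '9']]).length);
// 27  -- 3 x 3 x 3
```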

More information

Interested in learning more about brace expansion?

Performance

Braces is not only screaming fast, it’s also more accurate than other brace expansion libraries.

Better algorithms

Fortunately there is a solution to the “brace bomb” problem: don’t expand brace patterns into an array when they’re used for matching.

Instead, convert the pattern into an optimized regular expression. This is easier said than done, and braces is the only library that does this currently.

The proof is in the numbers

Minimatch gets exponentially slower as patterns increase in complexity, braces does not. The following results were generated using braces() and minimatch.braceExpand(), respectively.

Pattern braces minimatch
{1..9007199254740991} 298 B (5ms 459μs) N/A (freezes)
{1..1000000000000000} 41 B (1ms 15μs) N/A (freezes)
{1..100000000000000} 40 B (890μs) N/A (freezes)
{1..10000000000000} 39 B (2ms 49μs) N/A (freezes)
{1..1000000000000} 38 B (608μs) N/A (freezes)
{1..100000000000} 37 B (397μs) N/A (freezes)
{1..10000000000} 35 B (983μs) N/A (freezes)
{1..1000000000} 34 B (798μs) N/A (freezes)
{1..100000000} 33 B (733μs) N/A (freezes)
{1..10000000} 32 B (5ms 632μs) 78.89 MB (16s 388ms 569μs)
{1..1000000} 31 B (1ms 381μs) 6.89 MB (1s 496ms 887μs)
{1..100000} 30 B (950μs) 588.89 kB (146ms 921μs)
{1..10000} 29 B (1ms 114μs) 48.89 kB (14ms 187μs)
{1..1000} 28 B (760μs) 3.89 kB (1ms 453μs)
{1..100} 22 B (345μs) 291 B (196μs)
{1..10} 10 B (533μs) 20 B (37μs)
{1..3} 7 B (190μs) 5 B (27μs)

Faster algorithms

When you need expansion, braces is still much faster.

(the following results were generated using braces.expand() and minimatch.braceExpand(), respectively)

Pattern braces minimatch
{1..10000000} 78.89 MB (2s 698ms 642μs) 78.89 MB (18s 601ms 974μs)
{1..1000000} 6.89 MB (458ms 576μs) 6.89 MB (1s 491ms 621μs)
{1..100000} 588.89 kB (20ms 728μs) 588.89 kB (156ms 919μs)
{1..10000} 48.89 kB (2ms 202μs) 48.89 kB (13ms 641μs)
{1..1000} 3.89 kB (1ms 796μs) 3.89 kB (1ms 958μs)
{1..100} 291 B (424μs) 291 B (211μs)
{1..10} 20 B (487μs) 20 B (72μs)
{1..3} 5 B (166μs) 5 B (27μs)

If you’d like to run these comparisons yourself, see test/support/generate.js.

Benchmarks

Running benchmarks

Install dev dependencies:

Latest results

Braces is more accurate, without sacrificing performance.

About

Contributing

Pull requests and stars are always welcome. For bugs and feature requests, please create an issue.

Running Tests

Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command:

Building docs

(This project’s readme.md is generated by verb, please don’t edit the readme directly. Any changes to the readme must be made in the .verb.md readme template.)

To generate the readme, run the following command:

Commits Contributor
197 jonschlinkert
4 doowb
1 es128
1 eush77
1 hemanth
1 wtgtybhertgeghgtwtg

Author

Jon Schlinkert


This file was generated by verb-generate-readme, v0.8.0, on April 08, 2019.



snapdragon-util NPM version NPM monthly downloads NPM total downloads Linux Build Status

Utilities for the snapdragon parser/compiler.

Table of Contents

Install

Install with npm:

Install with yarn:

Usage

API

.isNode

Returns true if the given value is a node.

Params

Example

.noop

Emit an empty string for the given node.

Params

Example

.identity

Append node.val to compiler.output, exactly as it was created by the parser.

Params

Example

.append

Previously named .emit, this method appends the given val to compiler.output for the given node. Useful when you know what value should be appended in advance, regardless of the actual value of node.val.

Params

  • node {Object}: Instance of snapdragon-node
  • returns {Function}: Returns a compiler middleware function.

Example

.toNoop

Used in compiler middleware, this converts an AST node into an empty text node and deletes node.nodes if it exists. The advantage of this method is that, as opposed to completely removing the node, indices will not need to be re-calculated in sibling nodes, and nothing is appended to the output.

Params

  • node {Object}: Instance of snapdragon-node
  • nodes {Array}: Optionally pass a new nodes value, to replace the existing node.nodes array.

Example

.visit

Visit node with the given fn. The built-in .visit method in snapdragon automatically calls registered compilers, this allows you to pass a visitor function.

Params

  • node {Object}: Instance of snapdragon-node
  • fn {Function}
  • returns {Object}: returns the node after recursively visiting all child nodes.

Example
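The original example code is not reproduced here; a minimal sketch of a recursive visitor like the one described (assuming nodes that expose a nodes array, rather than the real snapdragon-node class) might look like:

```javascript
// Sketch of visit(): call fn on the node, then recurse over its children.
// The mapVisit() behavior described next would be the forEach step alone,
// skipping the call to fn on the first node.
function visit(node, fn) {
  fn(node);
  if (Array.isArray(node.nodes)) {
    node.nodes.forEach(child => visit(child, fn));
  }
  return node;
}

const ast = { type: 'root', nodes: [{ type: 'text' }, { type: 'star' }] };
const seen = [];
visit(ast, n => seen.push(n.type));
console.log(seen); // [ 'root', 'text', 'star' ]
```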

.mapVisit

Map visit the given fn over node.nodes. This is called by visit; use this method if you do not want fn to be called on the first node.

Params

  • node {Object}: Instance of snapdragon-node
  • options {Object}
  • fn {Function}
  • returns {Object}: returns the node

Example

.addOpen

Unshift an *.open node onto node.nodes.

Params

  • node {Object}: Instance of snapdragon-node
  • Node {Function}: (required) Node constructor function from snapdragon-node.
  • filter {Function}: Optionally specify a filter function to exclude the node.
  • returns {Object}: Returns the created opening node.

Example

.addClose

Push a *.close node onto node.nodes.

Params

  • node {Object}: Instance of snapdragon-node
  • Node {Function}: (required) Node constructor function from snapdragon-node.
  • filter {Function}: Optionally specify a filter function to exclude the node.
  • returns {Object}: Returns the created closing node.

Example

.wrapNodes

Wraps the given node with *.open and *.close nodes.

Params

  • node {Object}: Instance of snapdragon-node
  • Node {Function}: (required) Node constructor function from snapdragon-node.
  • filter {Function}: Optionally specify a filter function to exclude the node.
  • returns {Object}: Returns the node

.pushNode

Push the given node onto parent.nodes, and set parent as node.parent.

Params

  • parent {Object}
  • node {Object}: Instance of snapdragon-node
  • returns {Object}: Returns the child node

Example

.unshiftNode

Unshift node onto parent.nodes, and set parent as node.parent.

Params

  • parent {Object}
  • node {Object}: Instance of snapdragon-node
  • returns {undefined}

Example

.popNode

Pop the last node off of parent.nodes. The advantage of using this method is that it checks for node.nodes and works with any version of snapdragon-node.

Params

  • parent {Object}
  • node {Object}: Instance of snapdragon-node
  • returns {Number|Undefined}: Returns the length of node.nodes or undefined.

Example

.shiftNode

Shift the first node off of parent.nodes. The advantage of using this method is that it checks for node.nodes and works with any version of snapdragon-node.

Params

  • parent {Object}
  • node {Object}: Instance of snapdragon-node
  • returns {Number|Undefined}: Returns the length of node.nodes or undefined.

Example

.removeNode

Remove the specified node from parent.nodes.

Params

  • parent {Object}
  • node {Object}: Instance of snapdragon-node
  • returns {Object|undefined}: Returns the removed node, if successful, or undefined if it does not exist on parent.nodes.

Example

.isType

Returns true if node.type matches the given type. Throws a TypeError if node is not an instance of Node.

Params

  • node {Object}: Instance of snapdragon-node
  • type {String}
  • returns {Boolean}

Example

.hasType

Returns true if the given node has the given type in node.nodes. Throws a TypeError if node is not an instance of Node.

Params

  • node {Object}: Instance of snapdragon-node
  • type {String}
  • returns {Boolean}

Example

.firstOfType

Returns the first node from node.nodes of the given type.

Params

  • nodes {Array}
  • type {String}
  • returns {Object|undefined}: Returns the first matching node or undefined.

Example

.findNode

Returns the node at the specified index, or the first node of the given type from node.nodes.

Params

  • nodes {Array}
  • type {String|Number}: Node type or index.
  • returns {Object}: Returns a node or undefined.

Example

.isOpen

Returns true if the given node is an "*.open" node.

Params

Example

.isClose

Returns true if the given node is a "*.close" node.

Params

Example

.hasOpen

Returns true if node.nodes has an .open node

Params

Example

.hasClose

Returns true if node.nodes has a .close node

Params

Example

.hasOpenAndClose

Returns true if node.nodes has both .open and .close nodes

Params

Example

.addType

Push the given node onto the state.inside array for the given type. This array is used as a specialized “stack” for only the given node.type.

Params

  • state {Object}: The compiler.state object or custom state object.
  • node {Object}: Instance of snapdragon-node
  • returns {Array}: Returns the state.inside stack for the given type.

Example

.removeType

Remove the given node from the state.inside array for the given type. This array is used as a specialized “stack” for only the given node.type.

Params

  • state {Object}: The compiler.state object or custom state object.
  • node {Object}: Instance of snapdragon-node
  • returns {Array}: Returns the state.inside stack for the given type.

Example

.isEmpty

Returns true if node.val is an empty string, or node.nodes does not contain any non-empty text nodes.

Params

  • node {Object}: Instance of snapdragon-node
  • fn {Function}
  • returns {Boolean}

Example

.isInsideType

Returns true if the state.inside stack for the given type exists and has one or more nodes on it.

Params

  • state {Object}
  • type {String}
  • returns {Boolean}

Example

.isInside

Returns true if node is either a child or grand-child of the given type, or state.inside[type] is a non-empty array.

Params

  • state {Object}: Either the compiler.state object, if it exists, or a user-supplied state object.
  • node {Object}: Instance of snapdragon-node
  • type {String}: The node.type to check for.
  • returns {Boolean}

Example

.last

Get the last element (or the nth element from the end, when n is given) from the given array. Used for getting a node from node.nodes.

Params

  • array {Array}
  • n {Number}
  • returns {undefined}

.arrayify

Cast the given val to an array.

Params

  • val {any}
  • returns {Array}

Example

.stringify

Convert the given val to a string by joining with ,. Useful for creating a cheerio/CSS/DOM-style selector from a list of strings.

Params

  • val {any}
  • returns {String}

.trim

Ensure that the given value is a string and call .trim() on it, or return an empty string.

Params

  • str {String}
  • returns {String}

Release history

Changelog entries are classified using the following labels from keep-a-changelog:

  • added: for new features
  • changed: for changes in existing functionality
  • deprecated: for once-stable features removed in upcoming releases
  • removed: for deprecated features removed in this release
  • fixed: for any bug fixes

Custom labels used in this changelog:

  • dependencies: bumps dependencies
  • housekeeping: code re-organization, minor edits, or other changes that don’t fit in one of the other categories.

[3.0.0] - 2017-05-01

Changed

  • .emit was renamed to .append
  • .addNode was renamed to .pushNode
  • .getNode was renamed to .findNode
  • .isEmptyNodes was renamed to .isEmpty: also now works with node.nodes and/or node.val

Added

[0.1.0]

First release.

About

Contributing

Pull requests and stars are always welcome. For bugs and feature requests, please create an issue.

Please read the contributing guide for advice on opening issues, pull requests, and coding standards.

Building docs

(This project’s readme.md is generated by verb, please don’t edit the readme directly. Any changes to the readme must be made in the .verb.md readme template.)

To generate the readme, run the following command:

Running tests

Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command:

Author

Jon Schlinkert


This file was generated by verb-generate-readme, v0.6.0, on May 01, 2017.



semver(1) – The semantic versioner for npm

Install

Usage

As a node module:

You can also just load the module for the function that you care about, if you’d like to minimize your footprint.

// load the whole API at once in a single object
const semver = require('semver')

// or just load the bits you need
// all of them listed here, just pick and choose what you want

// classes
const SemVer = require('semver/classes/semver')
const Comparator = require('semver/classes/comparator')
const Range = require('semver/classes/range')

// functions for working with versions
const semverParse = require('semver/functions/parse')
const semverValid = require('semver/functions/valid')
const semverClean = require('semver/functions/clean')
const semverInc = require('semver/functions/inc')
const semverDiff = require('semver/functions/diff')
const semverMajor = require('semver/functions/major')
const semverMinor = require('semver/functions/minor')
const semverPatch = require('semver/functions/patch')
const semverPrerelease = require('semver/functions/prerelease')
const semverCompare = require('semver/functions/compare')
const semverRcompare = require('semver/functions/rcompare')
const semverCompareLoose = require('semver/functions/compare-loose')
const semverCompareBuild = require('semver/functions/compare-build')
const semverSort = require('semver/functions/sort')
const semverRsort = require('semver/functions/rsort')

// low-level comparators between versions
const semverGt = require('semver/functions/gt')
const semverLt = require('semver/functions/lt')
const semverEq = require('semver/functions/eq')
const semverNeq = require('semver/functions/neq')
const semverGte = require('semver/functions/gte')
const semverLte = require('semver/functions/lte')
const semverCmp = require('semver/functions/cmp')
const semverCoerce = require('semver/functions/coerce')

// working with ranges
const semverSatisfies = require('semver/functions/satisfies')
const semverMaxSatisfying = require('semver/ranges/max-satisfying')
const semverMinSatisfying = require('semver/ranges/min-satisfying')
const semverToComparators = require('semver/ranges/to-comparators')
const semverMinVersion = require('semver/ranges/min-version')
const semverValidRange = require('semver/ranges/valid')
const semverOutside = require('semver/ranges/outside')
const semverGtr = require('semver/ranges/gtr')
const semverLtr = require('semver/ranges/ltr')
const semverIntersects = require('semver/ranges/intersects')
const simplifyRange = require('semver/ranges/simplify')
const rangeSubset = require('semver/ranges/subset')

As a command-line utility:

$ semver -h

A JavaScript implementation of the https://semver.org/ specification

Usage: semver [options] <version> [<version> [...]]
Prints valid versions sorted by SemVer precedence

Options:
-r --range <range>
        Print versions that match the specified range.

-i --increment [<level>]
        Increment a version by the specified level.  Level can
        be one of: major, minor, patch, premajor, preminor,
        prepatch, or prerelease.  Default level is 'patch'.
        Only one version may be specified.

--preid <identifier>
        Identifier to be used to prefix premajor, preminor,
        prepatch or prerelease version increments.

-l --loose
        Interpret versions and ranges loosely

-p --include-prerelease
        Always include prerelease versions in range matching

-c --coerce
        Coerce a string into SemVer if possible
        (does not imply --loose)

--rtl
        Coerce version strings right to left

--ltr
        Coerce version strings left to right (default)

Program exits successfully if any valid version satisfies
all supplied ranges, and prints all satisfying versions.

If no satisfying versions are found, then exits failure.

Versions are printed in ascending order, so supplying
multiple versions to the utility will just sort them.

Versions

A “version” is described by the v2.0.0 specification found at https://semver.org/.

A leading "=" or "v" character is stripped off and ignored.

Ranges

A version range is a set of comparators which specify versions that satisfy the range.

A comparator is composed of an operator and a version. The set of primitive operators is:

  • < Less than
  • <= Less than or equal to
  • > Greater than
  • >= Greater than or equal to
  • = Equal. If no operator is specified, then equality is assumed, so this operator is optional, but MAY be included.

For example, the comparator >=1.2.7 would match the versions 1.2.7, 1.2.8, 2.5.3, and 1.3.9, but not the versions 1.2.6 or 1.1.0.

Comparators can be joined by whitespace to form a comparator set, which is satisfied by the intersection of all of the comparators it includes.

A range is composed of one or more comparator sets, joined by ||. A version matches a range if and only if every comparator in at least one of the ||-separated comparator sets is satisfied by the version.

For example, the range >=1.2.7 <1.3.0 would match the versions 1.2.7, 1.2.8, and 1.2.99, but not the versions 1.2.6, 1.3.0, or 1.1.0.

The range 1.2.7 || >=1.2.9 <2.0.0 would match the versions 1.2.7, 1.2.9, and 1.4.6, but not the versions 1.2.8 or 2.0.0.
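
To make the comparator-set semantics concrete, here is a minimal, self-contained sketch (not the semver library itself) that evaluates a range of ||-separated comparator sets against a plain numeric x.y.z version:

```javascript
// Toy evaluator: numeric x.y.z versions and primitive operators only
// (no prerelease tags, no advanced range syntax).
function parseVer (v) {
  return v.split('.').map(Number)
}

function cmpVer (a, b) {
  for (let i = 0; i < 3; i++) {
    if (a[i] !== b[i]) return a[i] < b[i] ? -1 : 1
  }
  return 0
}

function satisfies (version, range) {
  const v = parseVer(version)
  // A version matches if every comparator in at least one
  // ||-separated comparator set is satisfied.
  return range.split('||').some(set =>
    set.trim().split(/\s+/).every(comparator => {
      const m = /^(>=|<=|>|<|=?)(\d+\.\d+\.\d+)$/.exec(comparator)
      if (!m) return false
      const c = cmpVer(v, parseVer(m[2]))
      switch (m[1]) {
        case '>': return c > 0
        case '>=': return c >= 0
        case '<': return c < 0
        case '<=': return c <= 0
        default: return c === 0 // '=' or a bare version
      }
    })
  )
}

console.log(satisfies('1.2.8', '>=1.2.7 <1.3.0')) // true
console.log(satisfies('1.2.8', '1.2.7 || >=1.2.9 <2.0.0')) // false
```

The real library additionally handles prerelease tags and the advanced syntax described below.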

Prerelease Tags

If a version has a prerelease tag (for example, 1.2.3-alpha.3) then it will only be allowed to satisfy comparator sets if at least one comparator with the same [major, minor, patch] tuple also has a prerelease tag.

For example, the range >1.2.3-alpha.3 would be allowed to match the version 1.2.3-alpha.7, but it would not be satisfied by 3.4.5-alpha.9, even though 3.4.5-alpha.9 is technically “greater than” 1.2.3-alpha.3 according to the SemVer sort rules. The version range only accepts prerelease tags on the 1.2.3 version. The version 3.4.5 would satisfy the range, because it does not have a prerelease flag, and 3.4.5 is greater than 1.2.3-alpha.7.

The purpose for this behavior is twofold. First, prerelease versions frequently are updated very quickly, and contain many breaking changes that are (by the author’s design) not yet fit for public consumption. Therefore, by default, they are excluded from range matching semantics.

Second, a user who has opted into using a prerelease version has clearly indicated the intent to use that specific set of alpha/beta/rc versions. By including a prerelease tag in the range, the user is indicating that they are aware of the risk. However, it is still not appropriate to assume that they have opted into taking a similar risk on the next set of prerelease versions.

Note that this behavior can be suppressed (treating all prerelease versions as if they were normal versions, for the purpose of range matching) by setting the includePrerelease flag on the options object to any functions that do range matching.

Prerelease Identifiers

The method .inc takes an additional identifier string argument that will append the value of the string as a prerelease identifier. Incrementing the resulting version again continues the same prerelease series.

Advanced Range Syntax

Advanced range syntax desugars to primitive comparators in deterministic ways.

Advanced ranges may be combined in the same way as primitive comparators using white space or ||.

Hyphen Ranges X.Y.Z - A.B.C

Specifies an inclusive set.

  • 1.2.3 - 2.3.4 := >=1.2.3 <=2.3.4

If a partial version is provided as the first version in the inclusive range, then the missing pieces are replaced with zeroes.

  • 1.2 - 2.3.4 := >=1.2.0 <=2.3.4

If a partial version is provided as the second version in the inclusive range, then all versions that start with the supplied parts of the tuple are accepted, but nothing that would be greater than the provided tuple parts.

  • 1.2.3 - 2.3 := >=1.2.3 <2.4.0-0
  • 1.2.3 - 2 := >=1.2.3 <3.0.0-0

X-Ranges 1.2.x 1.X 1.2.* *

Any of X, x, or * may be used to “stand in” for one of the numeric values in the [major, minor, patch] tuple.

  • * := >=0.0.0 (Any version satisfies)
  • 1.x := >=1.0.0 <2.0.0-0 (Matching major version)
  • 1.2.x := >=1.2.0 <1.3.0-0 (Matching major and minor versions)

A partial version range is treated as an X-Range, so the special character is in fact optional.

  • "" (empty string) := * := >=0.0.0
  • 1 := 1.x.x := >=1.0.0 <2.0.0-0
  • 1.2 := 1.2.x := >=1.2.0 <1.3.0-0

Tilde Ranges ~1.2.3 ~1.2 ~1

Allows patch-level changes if a minor version is specified on the comparator. Allows minor-level changes if not.

  • ~1.2.3 := >=1.2.3 <1.(2+1).0 := >=1.2.3 <1.3.0-0
  • ~1.2 := >=1.2.0 <1.(2+1).0 := >=1.2.0 <1.3.0-0 (Same as 1.2.x)
  • ~1 := >=1.0.0 <(1+1).0.0 := >=1.0.0 <2.0.0-0 (Same as 1.x)
  • ~0.2.3 := >=0.2.3 <0.(2+1).0 := >=0.2.3 <0.3.0-0
  • ~0.2 := >=0.2.0 <0.(2+1).0 := >=0.2.0 <0.3.0-0 (Same as 0.2.x)
  • ~0 := >=0.0.0 <(0+1).0.0 := >=0.0.0 <1.0.0-0 (Same as 0.x)
  • ~1.2.3-beta.2 := >=1.2.3-beta.2 <1.3.0-0 Note that prereleases in the 1.2.3 version will be allowed, if they are greater than or equal to beta.2. So, 1.2.3-beta.4 would be allowed, but 1.2.4-beta.2 would not, because it is a prerelease of a different [major, minor, patch] tuple.

Caret Ranges ^1.2.3 ^0.2.5 ^0.0.4

Allows changes that do not modify the left-most non-zero element in the [major, minor, patch] tuple. In other words, this allows patch and minor updates for versions 1.0.0 and above, patch updates for versions 0.X >=0.1.0, and no updates for versions 0.0.X.

Many authors treat a 0.x version as if the x were the major “breaking-change” indicator.

Caret ranges are ideal when an author may make breaking changes between 0.2.4 and 0.3.0 releases, which is a common practice. However, it presumes that there will not be breaking changes between 0.2.4 and 0.2.5. It allows for changes that are presumed to be additive (but non-breaking), according to commonly observed practices.

  • ^1.2.3 := >=1.2.3 <2.0.0-0
  • ^0.2.3 := >=0.2.3 <0.3.0-0
  • ^0.0.3 := >=0.0.3 <0.0.4-0
  • ^1.2.3-beta.2 := >=1.2.3-beta.2 <2.0.0-0 Note that prereleases in the 1.2.3 version will be allowed, if they are greater than or equal to beta.2. So, 1.2.3-beta.4 would be allowed, but 1.2.4-beta.2 would not, because it is a prerelease of a different [major, minor, patch] tuple.
  • ^0.0.3-beta := >=0.0.3-beta <0.0.4-0 Note that prereleases in the 0.0.3 version only will be allowed, if they are greater than or equal to beta. So, 0.0.3-pr.2 would be allowed.

When parsing caret ranges, a missing patch value desugars to the number 0, but will allow flexibility within that value, even if the major and minor versions are both 0.

  • ^1.2.x := >=1.2.0 <2.0.0-0
  • ^0.0.x := >=0.0.0 <0.1.0-0
  • ^0.0 := >=0.0.0 <0.1.0-0

Missing minor and patch values desugar to zero, but also allow flexibility within those values, even if the major version is zero.

  • ^1.x := >=1.0.0 <2.0.0-0
  • ^0.x := >=0.0.0 <1.0.0-0
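
The caret rules above can be sketched as a small self-contained function (numeric x.y.z only; prerelease and X-range handling omitted):

```javascript
// Toy desugaring of ^x.y.z: the upper bound bumps the left-most
// non-zero element and appends -0 to exclude that bound's prereleases.
function desugarCaret (version) {
  const [major, minor, patch] = version.split('.').map(Number)
  let upper
  if (major > 0) upper = `${major + 1}.0.0-0`
  else if (minor > 0) upper = `0.${minor + 1}.0-0`
  else upper = `0.0.${patch + 1}-0`
  return `>=${version} <${upper}`
}

console.log(desugarCaret('1.2.3')) // '>=1.2.3 <2.0.0-0'
console.log(desugarCaret('0.2.3')) // '>=0.2.3 <0.3.0-0'
console.log(desugarCaret('0.0.3')) // '>=0.0.3 <0.0.4-0'
```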

Range Grammar

Putting all this together, here is a Backus-Naur grammar for ranges, for the benefit of parser authors:

range-set  ::= range ( logical-or range ) *
logical-or ::= ( ' ' ) * '||' ( ' ' ) *
range      ::= hyphen | simple ( ' ' simple ) * | ''
hyphen     ::= partial ' - ' partial
simple     ::= primitive | partial | tilde | caret
primitive  ::= ( '<' | '>' | '>=' | '<=' | '=' ) partial
partial    ::= xr ( '.' xr ( '.' xr qualifier ? )? )?
xr         ::= 'x' | 'X' | '*' | nr
nr         ::= '0' | ['1'-'9'] ( ['0'-'9'] ) *
tilde      ::= '~' partial
caret      ::= '^' partial
qualifier  ::= ( '-' pre )? ( '+' build )?
pre        ::= parts
build      ::= parts
parts      ::= part ( '.' part ) *
part       ::= nr | [-0-9A-Za-z]+

Functions

All methods and classes take a final options object argument. All options in this object are false by default. The options supported are:

  • loose Be more forgiving about not-quite-valid semver strings. (Any resulting output will always be 100% strict compliant, of course.) For backwards compatibility reasons, if the options argument is a boolean value instead of an object, it is interpreted to be the loose param.
  • includePrerelease Set to suppress the default behavior of excluding prerelease tagged versions from ranges unless they are explicitly opted into.

Strict-mode Comparators and Ranges will be strict about the SemVer strings that they parse.

  • valid(v): Return the parsed version, or null if it’s not valid.
  • inc(v, release): Return the version incremented by the release type (major, premajor, minor, preminor, patch, prepatch, or prerelease), or null if it’s not valid
    • premajor in one call will bump the version up to the next major version and down to a prerelease of that major version. preminor, and prepatch work the same way.
    • If called from a non-prerelease version, the prerelease will work the same as prepatch. It increments the patch version, then makes a prerelease. If the input version is already a prerelease it simply increments it.
  • prerelease(v): Returns an array of prerelease components, or null if none exist. Example: prerelease('1.2.3-alpha.1') -> ['alpha', 1]
  • major(v): Return the major version number.
  • minor(v): Return the minor version number.
  • patch(v): Return the patch version number.
  • intersects(r1, r2, loose): Return true if the two supplied ranges or comparators intersect.
  • parse(v): Attempt to parse a string as a semantic version, returning either a SemVer object or null.

Comparison

  • gt(v1, v2): v1 > v2
  • gte(v1, v2): v1 >= v2
  • lt(v1, v2): v1 < v2
  • lte(v1, v2): v1 <= v2
  • eq(v1, v2): v1 == v2 This is true if they’re logically equivalent, even if they’re not the exact same string. You already know how to compare strings.
  • neq(v1, v2): v1 != v2 The opposite of eq.
  • cmp(v1, comparator, v2): Pass in a comparison string, and it’ll call the corresponding function above. "===" and "!==" do simple string comparison, but are included for completeness. Throws if an invalid comparison string is provided.
  • compare(v1, v2): Return 0 if v1 == v2, or 1 if v1 is greater, or -1 if v2 is greater. Sorts in ascending order if passed to Array.sort().
  • rcompare(v1, v2): The reverse of compare. Sorts an array of versions in descending order when passed to Array.sort().
  • compareBuild(v1, v2): The same as compare but considers build when two versions are equal. Sorts in ascending order if passed to Array.sort().
  • diff(v1, v2): Returns difference between two versions by the release type (major, premajor, minor, preminor, patch, prepatch, or prerelease), or null if the versions are the same.

Comparators

  • intersects(comparator): Return true if the comparators intersect

Ranges

  • validRange(range): Return the valid range or null if it’s not valid
  • satisfies(version, range): Return true if the version satisfies the range.
  • maxSatisfying(versions, range): Return the highest version in the list that satisfies the range, or null if none of them do.
  • minSatisfying(versions, range): Return the lowest version in the list that satisfies the range, or null if none of them do.
  • minVersion(range): Return the lowest version that can possibly match the given range.
  • gtr(version, range): Return true if version is greater than all the versions possible in the range.
  • ltr(version, range): Return true if version is less than all the versions possible in the range.
  • outside(version, range, hilo): Return true if the version is outside the bounds of the range in either the high or low direction. The hilo argument must be either the string '>' or '<'. (This is the function called by gtr and ltr.)
  • intersects(range): Return true if any of the ranges comparators intersect
  • simplifyRange(versions, range): Return a “simplified” range that matches the same items in versions list as the range specified. Note that it does not guarantee that it would match the same versions in all cases, only for the set of versions provided. This is useful when generating ranges by joining together multiple versions with || programmatically, to provide the user with something a bit more ergonomic. If the provided range is shorter in string-length than the generated range, then that is returned.
  • subset(subRange, superRange): Return true if the subRange range is entirely contained by the superRange range.

Note that, since ranges may be non-contiguous, a version might not be greater than a range, less than a range, or satisfy a range! For example, the range 1.2 <1.2.9 || >2.0.0 would have a hole from 1.2.9 until 2.0.0, so the version 1.2.10 would not be greater than the range (because 2.0.1 satisfies, which is higher), nor less than the range (since 1.2.8 satisfies, which is lower), and it also does not satisfy the range.

If you want to know if a version satisfies or does not satisfy a range, use the satisfies(version, range) function.

Coercion

  • coerce(version, options): Coerces a string to semver if possible

This aims to provide a very forgiving translation of a non-semver string to semver. It looks for the first digit in a string, and consumes all remaining characters which satisfy at least a partial semver (e.g., 1, 1.2, 1.2.3) up to the max permitted length (256 characters). Longer versions are simply truncated (4.6.3.9.2-alpha2 becomes 4.6.3). All surrounding text is simply ignored (v3.4 replaces v3.3.1 becomes 3.4.0). Only text which lacks digits will fail coercion (version one is not valid). The maximum length for any semver component considered for coercion is 16 characters; longer components will be ignored (10000000000000000.4.7.4 becomes 4.7.4). The maximum value for any semver component is Number.MAX_SAFE_INTEGER (2**53 - 1); higher value components are invalid (9999999999999999.4.7.4 is likely invalid).

If the options.rtl flag is set, then coerce will return the right-most coercible tuple that does not share an ending index with a longer coercible tuple. For example, 1.2.3.4 will return 2.3.4 in rtl mode, not 4.0.0. 1.2.3/4 will return 4.0.0, because the 4 is not a part of any other overlapping SemVer tuple.
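
The forward (left-to-right) behavior can be sketched with a self-contained helper (a rough approximation, not the library's implementation; component length and value limits are omitted):

```javascript
// Toy coercion: take the first run of digits, keep up to two further
// dot-separated numeric parts, and pad missing parts with zeroes.
function coerceLoose (str) {
  const m = /(\d+)(?:\.(\d+))?(?:\.(\d+))?/.exec(str)
  if (!m) return null
  return `${m[1]}.${m[2] || '0'}.${m[3] || '0'}`
}

console.log(coerceLoose('v3.4 replaces v3.3.1')) // '3.4.0'
console.log(coerceLoose('4.6.3.9.2-alpha2')) // '4.6.3'
console.log(coerceLoose('version one')) // null
```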

Clean

  • clean(version): Clean a string to be a valid semver if possible

This will return a cleaned and trimmed semver version. If the provided version is not valid, null is returned. This does not work for ranges.

For example:

  • s.clean(' = v 2.1.5foo'): null
  • s.clean(' = v 2.1.5foo', { loose: true }): '2.1.5-foo'
  • s.clean(' = v 2.1.5-foo'): null
  • s.clean(' = v 2.1.5-foo', { loose: true }): '2.1.5-foo'
  • s.clean('=v2.1.5'): '2.1.5'
  • s.clean(' =v2.1.5'): '2.1.5'
  • s.clean(' 2.1.5 '): '2.1.5'
  • s.clean('~1.0.0'): null

Exported Modules

You may pull in just the part of this semver utility that you need, if you are sensitive to packing and tree-shaking concerns. The main require('semver') export uses getter functions to lazily load the parts of the API that are used.

The following modules are available:

  • require('semver')
  • require('semver/classes')
  • require('semver/classes/comparator')
  • require('semver/classes/range')
  • require('semver/classes/semver')
  • require('semver/functions/clean')
  • require('semver/functions/cmp')
  • require('semver/functions/coerce')
  • require('semver/functions/compare')
  • require('semver/functions/compare-build')
  • require('semver/functions/compare-loose')
  • require('semver/functions/diff')
  • require('semver/functions/eq')
  • require('semver/functions/gt')
  • require('semver/functions/gte')
  • require('semver/functions/inc')
  • require('semver/functions/lt')
  • require('semver/functions/lte')
  • require('semver/functions/major')
  • require('semver/functions/minor')
  • require('semver/functions/neq')
  • require('semver/functions/parse')
  • require('semver/functions/patch')
  • require('semver/functions/prerelease')
  • require('semver/functions/rcompare')
  • require('semver/functions/rsort')
  • require('semver/functions/satisfies')
  • require('semver/functions/sort')
  • require('semver/functions/valid')
  • require('semver/ranges/gtr')
  • require('semver/ranges/intersects')
  • require('semver/ranges/ltr')
  • require('semver/ranges/max-satisfying')
  • require('semver/ranges/min-satisfying')
  • require('semver/ranges/min-version')
  • require('semver/ranges/outside')
  • require('semver/ranges/to-comparators')
  • require('semver/ranges/valid')


fast-glob

It’s a very fast and efficient glob library for Node.js.

This package provides methods for traversing the file system and returning pathnames that match a specified set of patterns, according to the rules used by the Unix Bash shell (with some simplifications); results are returned in arbitrary order. Quick, simple, effective.

Table of Contents

Details

Highlights

  • Fast. Probably the fastest.
  • Synchronous, Promise and Stream API.
  • Object mode. Can return more than just strings.
  • Error-tolerant.

Donation

Donate

Old and modern mode

This package works in two modes, depending on the environment in which it is used.

  • Old mode. Node.js below 10.10 or when the stats option is enabled.
  • Modern mode. Node.js 10.10+ and the stats option is disabled.

The modern mode is faster. Learn more about the internal mechanism.

Pattern syntax

:warning: Always use forward-slashes in glob expressions (patterns and ignore option). Use backslashes for escaping characters.

There is more than one form of syntax: basic and advanced. Below is a brief overview of the supported features. Also pay attention to our FAQ.

:book: This package uses a micromatch as a library for pattern matching.

Basic syntax

  • An asterisk (*) — matches everything except slashes (path separators), hidden files (names starting with .).
  • A double star or globstar (**) — matches zero or more directories.
  • Question mark (?) – matches any single character except slashes (path separators).
  • Sequence ([seq]) — matches any character in sequence.

:book: A few additional words about the basic matching behavior.

Some examples:

  • src/**/*.js — matches all files in the src directory (any level of nesting) that have the .js extension.
  • src/*.?? — matches all files in the src directory (only first level of nesting) that have a two-character extension.
  • file-[01].js — matches files: file-0.js, file-1.js.
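
A toy translation of these basic tokens into a RegExp illustrates the matching rules (a sketch only; the package actually delegates pattern matching to micromatch):

```javascript
// Handles only *, ? and ** from the basic syntax above;
// hidden-file and [seq] behavior is omitted for brevity.
function globToRegExp (glob) {
  const source = glob
    .replace(/[.+^${}()|[\]\\]/g, '\\$&') // escape RegExp specials
    .replace(/\*\*/g, '\u0000')           // protect globstar
    .replace(/\*/g, '[^/]*')              // * stops at path separators
    .replace(/\?/g, '[^/]')               // ? is one non-slash character
    .replace(/\u0000/g, '.*')             // ** crosses directories
  return new RegExp(`^${source}$`)
}

console.log(globToRegExp('src/**/*.js').test('src/a/b/c.js')) // true
console.log(globToRegExp('src/*.??').test('src/main.ts')) // true
console.log(globToRegExp('src/*.js').test('src/a/b.js')) // false
```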

Advanced syntax

:book: A few additional words about the advanced matching behavior.

Some examples:

  • src/**/*.{css,scss} — matches all files in the src directory (any level of nesting) that have the .css or .scss extension.
  • file-[[:digit:]].js — matches files: file-0.js, file-1.js, …, file-9.js.
  • file-{1..3}.js — matches files: file-1.js, file-2.js, file-3.js.
  • file-(1|2) — matches files: file-1.js, file-2.js.

Installation

npm install fast-glob

API

Asynchronous

Returns a Promise with an array of matching entries.

Synchronous

Returns an array of matching entries.

Stream

Returns a ReadableStream that emits a data event for each matching entry.

patterns

  • Required: true
  • Type: string | string[]

Any correct pattern(s).

:1234: Pattern syntax

:warning: This package does not respect the order of patterns. First, all the negative patterns are applied, and only then the positive patterns. If you want to get a certain order of records, use sorting or split calls.

options

See Options section.

Helpers

generateTasks(patterns, [options])

Returns the internal representation of patterns (a Task groups patterns by their base directory).

patterns
  • Required: true
  • Type: string | string[]

Any correct pattern(s).

options

See Options section.

isDynamicPattern(pattern, [options])

Returns true if the passed pattern is a dynamic pattern.

:1234: What is a static or dynamic pattern?

pattern
  • Required: true
  • Type: string

Any correct pattern.

options

See Options section.

escapePath(pattern)

Returns a path with escaped special characters (*?|(){}[], ! at the beginning of line, @+! before the opening parenthesis).

pattern
  • Required: true
  • Type: string

Any string, for example, a path to a file.

Options

Common options

concurrency

  • Type: number
  • Default: os.cpus().length

Specifies the maximum number of concurrent requests from a reader to read directories.

:book: The higher the number, the higher the performance and load on the file system. If you want to read in quiet mode, set the value to a comfortable number or 1.

cwd

  • Type: string
  • Default: process.cwd()

The current working directory in which to search.

deep

  • Type: number
  • Default: Infinity

Specifies the maximum depth of a read directory relative to the start directory.

For example, you have the following tree:

:book: If you specify a pattern with some base directory, this directory will not participate in the calculation of the depth of the found directories. Think of it as a cwd option.

followSymbolicLinks

  • Type: boolean
  • Default: true

Indicates whether to traverse descendants of symbolic link directories.

:book: If the stats option is specified, the information about the symbolic link (fs.lstat) will be replaced with information about the entry (fs.stat) behind it.

fs

  • Type: FileSystemAdapter
  • Default: fs.*

Custom implementation of methods for working with the file system.

ignore

  • Type: string[]
  • Default: []

An array of glob patterns to exclude matches. This is an alternative way to use negative patterns.

suppressErrors

  • Type: boolean
  • Default: false

By default this package suppresses only ENOENT errors. Set to true to suppress any error.

:book: Can be useful when the directory has entries with a special level of access.

throwErrorOnBrokenSymbolicLink

  • Type: boolean
  • Default: false

Throw an error if a symbolic link is broken (true), or safely return the lstat call result (false).

:book: This option has no effect on errors when reading the symbolic link directory.

Output control

absolute

  • Type: boolean
  • Default: false

Return the absolute path for entries.

:book: This option is required if you want to use negative patterns with absolute path, for example, !${__dirname}/*.js.

markDirectories

  • Type: boolean
  • Default: false

Mark the directory path with the final slash.

objectMode

  • Type: boolean
  • Default: false

Returns objects (instead of strings) describing entries.

The object has the following fields:

  • name (string) — the last part of the path (basename)
  • path (string) — full path relative to the pattern base directory
  • dirent (fs.Dirent) — instance of fs.Dirent

:book: An object is an internal representation of an entry, so getting it does not affect performance.

onlyDirectories

  • Type: boolean
  • Default: false

Return only directories.

:book: If true, the onlyFiles option is automatically false.

onlyFiles

  • Type: boolean
  • Default: true

Return only files.

stats

  • Type: boolean
  • Default: false

Enables an object mode with an additional field:

  • stats (fs.Stats) — instance of fs.Stats

:book: Returns fs.stat instead of fs.lstat for symbolic links when the followSymbolicLinks option is specified.

:warning: Unlike object mode, this mode requires additional calls to the file system. On average, this mode is at least twice as slow. See old and modern mode for more details.

unique

  • Type: boolean
  • Default: true

Ensures that the returned entries are unique.

If true and duplicate entries are found, only the first occurrence is returned.

Matching control

braceExpansion

  • Type: boolean
  • Default: true

Enables Bash-like brace expansion.

:1234: Syntax description or more detailed description.

caseSensitiveMatch

  • Type: boolean
  • Default: true

Enables a case-sensitive mode for matching files.

dot

  • Type: boolean
  • Default: false

Allow patterns to match entries that begin with a period (.).

:book: Note that an explicit dot in a portion of the pattern will always match dot files.

extglob

  • Type: boolean
  • Default: true

Enables Bash-like extglob functionality.

:1234: Syntax description.

globstar

  • Type: boolean
  • Default: true

Enables recursive matching for patterns containing **. If false, ** behaves exactly like *.

baseNameMatch

  • Type: boolean
  • Default: false

If set to true, then patterns without slashes will be matched against the basename of the path if it contains slashes.

FAQ

What is a static or dynamic pattern?

All patterns can be divided into two types:

  • static. A pattern is considered static if it can be used to get an entry on the file system without using matching mechanisms. For example, the file.js pattern is a static pattern because we can just verify that it exists on the file system.
  • dynamic. A pattern is considered dynamic if it cannot be used directly to find occurrences without using a matching mechanism. For example, the * pattern is a dynamic pattern because we cannot use this pattern directly.

A pattern is considered dynamic if it contains any of the following characters (here … stands for any characters, possibly none) or options:

  • The caseSensitiveMatch option is disabled
  • \\ (the escape character)
  • *, ?, ! (at the beginning of line)
  • […]
  • (…|…)
  • @(…), !(…), *(…), ?(…), +(…) (respects the extglob option)
  • {…,…}, {…..…} (respects the braceExpansion option)

How to write patterns on Windows?

Always use forward-slashes in glob expressions (patterns and ignore option). Use backslashes for escaping characters. With the cwd option use a convenient format.

Bad

Good

:book: Use the normalize-path or the unixify package to convert Windows-style path to a Unix-style path.

Read more about matching with backslashes.

Why do parentheses match incorrectly?

Refers to Bash. You need to escape special characters:

Read more about matching special characters as literals.

How to exclude directory from reading?

You can use a negative pattern like this: !**/node_modules or !**/node_modules/**. Also you can use ignore option. Just look at the example below.

If you don’t want to read the second directory, you must write the following pattern: !**/second or !**/second/**.

:warning: When you write !**/second/**/* it means that the directory will be read, but all the entries will not be included in the results.

You have to understand that if you write the pattern to exclude directories, then the directory will not be read under any circumstances.

How to use UNC path?

You cannot use Uniform Naming Convention (UNC) paths as patterns (due to syntax), but you can use them as cwd directory.

Compatible with node-glob?

node-glob     fast-glob
---------     ---------
cwd           cwd
root          -
dot           dot
nomount       -
mark          markDirectories
nosort        -
nounique      unique
nobrace       braceExpansion
noglobstar    globstar
noext         extglob
nocase        caseSensitiveMatch
matchBase     baseNameMatch
nodir         onlyFiles
ignore        ignore
follow        followSymbolicLinks
realpath      -
absolute      absolute

Benchmarks

Server

Link: Vultr Bare Metal

You can see results here for latest release.

Nettop

Link: Zotac bi323

You can see results here for latest release.

Changelog

See the Releases section of our GitHub project for changelog for each release version.



Source Map

Build Status

NPM

This is a library to generate and consume the source map format described here.

Use with Node

npm install source-map

Use on the Web


Table of Contents

Examples

Consuming a source map

Generating a source map

In depth guide: Compiling to JavaScript, and Debugging with Source Maps

With SourceNode (high level API)

With SourceMapGenerator (low level API)

API

Get a reference to the module:

SourceMapConsumer

A SourceMapConsumer instance represents a parsed source map which we can query for information about the original file positions by giving it a file position in the generated source.

new SourceMapConsumer(rawSourceMap)

The only parameter is the raw source map (either as a string which can be JSON.parse’d, or an object). According to the spec, source maps have the following attributes:

  • version: Which version of the source map spec this map is following.

  • sources: An array of URLs to the original source files.

  • names: An array of identifiers which can be referenced by individual mappings.

  • sourceRoot: Optional. The URL root from which all sources are relative.

  • sourcesContent: Optional. An array of contents of the original source files.

  • mappings: A string of base64 VLQs which contain the actual mappings.

  • file: Optional. The generated filename this source map is associated with.

SourceMapConsumer.prototype.computeColumnSpans()

Compute the last column for each generated mapping. The last column is inclusive.

SourceMapConsumer.prototype.originalPositionFor(generatedPosition)

Returns the original source, line, and column information for the generated source’s line and column positions provided. The only argument is an object with the following properties:

  • line: The line number in the generated source.

  • column: The column number in the generated source.

  • bias: Either SourceMapConsumer.GREATEST_LOWER_BOUND or SourceMapConsumer.LEAST_UPPER_BOUND. Specifies whether to return the closest element that is smaller than or greater than the one we are searching for, respectively, if the exact element cannot be found. Defaults to SourceMapConsumer.GREATEST_LOWER_BOUND.

and an object is returned with the following properties:

  • source: The original source file, or null if this information is not available.

  • line: The line number in the original source, or null if this information is not available.

  • column: The column number in the original source, or null if this information is not available.

  • name: The original identifier, or null if this information is not available.

SourceMapConsumer.prototype.generatedPositionFor(originalPosition)

Returns the generated line and column information for the original source, line, and column positions provided. The only argument is an object with the following properties:

  • source: The filename of the original source.

  • line: The line number in the original source.

  • column: The column number in the original source.

and an object is returned with the following properties:

  • line: The line number in the generated source, or null.

  • column: The column number in the generated source, or null.

SourceMapConsumer.prototype.allGeneratedPositionsFor(originalPosition)

Returns all generated line and column information for the original source, line, and column provided. If no column is provided, returns all mappings corresponding to either the line we are searching for or the next closest line that has any mappings. Otherwise, returns all mappings corresponding to the given line and either the column we are searching for or the next closest column that has any offsets.

The only argument is an object with the following properties:

  • source: The filename of the original source.

  • line: The line number in the original source.

  • column: Optional. The column number in the original source.

and an array of objects is returned, each with the following properties:

  • line: The line number in the generated source, or null.

  • column: The column number in the generated source, or null.

SourceMapConsumer.prototype.hasContentsOfAllSources()

Return true if we have the embedded source content for every source listed in the source map, false otherwise.

In other words, if this method returns true, then consumer.sourceContentFor(s) will succeed for every source s in consumer.sources.

SourceMapConsumer.prototype.sourceContentFor(source[, returnNullOnMissing])

Returns the original source content for the source provided. The only argument is the URL of the original source file.

If the source content for the given source is not found, then an error is thrown. Optionally, pass true as the second param to have null returned instead.

SourceMapConsumer.prototype.eachMapping(callback, context, order)

Iterate over each mapping between an original source/line/column and a generated line/column in this source map.

  • callback: The function that is called with each mapping. Mappings have the form { source, generatedLine, generatedColumn, originalLine, originalColumn, name }

  • context: Optional. If specified, this object will be the value of this every time that callback is called.

  • order: Either SourceMapConsumer.GENERATED_ORDER or SourceMapConsumer.ORIGINAL_ORDER. Specifies whether you want to iterate over the mappings sorted by the generated file’s line/column order or the original’s source/line/column order, respectively. Defaults to SourceMapConsumer.GENERATED_ORDER.

SourceMapGenerator

An instance of the SourceMapGenerator represents a source map which is being built incrementally.

new SourceMapGenerator([startOfSourceMap])

You may pass an object with the following properties:

  • file: The filename of the generated source that this source map is associated with.

  • sourceRoot: A root for all relative URLs in this source map.

  • skipValidation: Optional. When true, disables validation of mappings as they are added. This can improve performance but should be used with discretion, as a last resort. Even then, one should avoid using this flag when running tests, if possible.

SourceMapGenerator.fromSourceMap(sourceMapConsumer)

Creates a new SourceMapGenerator from an existing SourceMapConsumer instance.

  • sourceMapConsumer The SourceMap.

SourceMapGenerator.prototype.addMapping(mapping)

Add a single mapping from original source line and column to the generated source’s line and column for this source map being created. The mapping object should have the following properties:

  • generated: An object with the generated line and column positions.

  • original: An object with the original line and column positions.

  • source: The original source file (relative to the sourceRoot).

  • name: An optional original token name for this mapping.

SourceMapGenerator.prototype.setSourceContent(sourceFile, sourceContent)

Set the source content for an original source file.

  • sourceFile the URL of the original source file.

  • sourceContent the content of the source file.

SourceMapGenerator.prototype.applySourceMap(sourceMapConsumer[, sourceFile[, sourceMapPath]])

Applies a SourceMap for a source file to the SourceMap. Each mapping to the supplied source file is rewritten using the supplied SourceMap. Note: The resolution for the resulting mappings is the minimum of this map and the supplied map.

  • sourceMapConsumer: The SourceMap to be applied.

  • sourceFile: Optional. The filename of the source file. If omitted, sourceMapConsumer.file will be used, if it exists. Otherwise an error will be thrown.

  • sourceMapPath: Optional. The dirname of the path to the SourceMap to be applied. If relative, it is relative to the SourceMap.

    This parameter is needed when the two SourceMaps aren’t in the same directory, and the SourceMap to be applied contains relative source paths. If so, those relative source paths need to be rewritten relative to the SourceMap.

    If omitted, it is assumed that both SourceMaps are in the same directory, thus not needing any rewriting. (Supplying '.' has the same effect.)

SourceMapGenerator.prototype.toString()

Renders the source map being generated to a string.

SourceNode

SourceNodes provide a way to abstract over interpolating and/or concatenating snippets of generated JavaScript source code, while maintaining the line and column information associated between those snippets and the original source code. This is useful as the final intermediate representation a compiler might use before outputting the generated JS and source map.

new SourceNode([line, column, source[, chunk[, name]]])

  • line: The original line number associated with this source node, or null if it isn’t associated with an original line.

  • column: The original column number associated with this source node, or null if it isn’t associated with an original column.

  • source: The original source’s filename; null if no filename is provided.

  • chunk: Optional. Is immediately passed to SourceNode.prototype.add, see below.

  • name: Optional. The original identifier.

SourceNode.fromStringWithSourceMap(code, sourceMapConsumer[, relativePath])

Creates a SourceNode from generated code and a SourceMapConsumer.

  • code: The generated code

  • sourceMapConsumer The SourceMap for the generated code

  • relativePath The optional path that relative sources in sourceMapConsumer should be relative to.

SourceNode.prototype.add(chunk)

Add a chunk of generated JS to this source node.

  • chunk: A string snippet of generated JS code, another instance of SourceNode, or an array where each member is one of those things.

SourceNode.prototype.prepend(chunk)

Prepend a chunk of generated JS to this source node.

  • chunk: A string snippet of generated JS code, another instance of SourceNode, or an array where each member is one of those things.

SourceNode.prototype.setSourceContent(sourceFile, sourceContent)

Set the source content for a source file. This will be added to the SourceMap in the sourcesContent field.

  • sourceFile: The filename of the source file

  • sourceContent: The content of the source file

SourceNode.prototype.walk(fn)

Walk over the tree of JS snippets in this node and its children. The walking function is called once for each snippet of JS and is passed that snippet and the its original associated source’s line/column location.

  • fn: The traversal function.

SourceNode.prototype.walkSourceContents(fn)

Walk over the tree of SourceNodes. The walking function is called for each source file content and is passed the filename and source content.

  • fn: The traversal function.

SourceNode.prototype.join(sep)

Like Array.prototype.join except for SourceNodes. Inserts the separator between each of this source node’s children.

  • sep: The separator.

SourceNode.prototype.replaceRight(pattern, replacement)

Call String.prototype.replace on the very right-most source snippet. Useful for trimming white space from the end of a source node, etc.

  • pattern: The pattern to replace.

  • replacement: The thing to replace the pattern with.

SourceNode.prototype.toString()

Return the string representation of this source node. Walks over the tree and concatenates all the various snippets together to one string.

SourceNode.prototype.toStringWithSourceMap([startOfSourceMap])

Returns the string representation of this tree of source nodes, plus a SourceMapGenerator which contains all the mappings between the generated and original sources.

The arguments are the same as those to new SourceMapGenerator.



Source Map

Build Status

NPM

This is a library to generate and consume the source map format described here.

Use with Node

npm install source-map

Use on the Web


Table of Contents

Examples

Consuming a source map

Generating a source map

In depth guide: Compiling to JavaScript, and Debugging with Source Maps

With SourceNode (high level API)

With SourceMapGenerator (low level API)

API

Get a reference to the module:

SourceMapConsumer

A SourceMapConsumer instance represents a parsed source map which we can query for information about the original file positions by giving it a file position in the generated source.

new SourceMapConsumer(rawSourceMap)

The only parameter is the raw source map (either as a string which can be JSON.parse’d, or an object). According to the spec, source maps have the following attributes:

  • version: Which version of the source map spec this map is following.

  • sources: An array of URLs to the original source files.

  • names: An array of identifiers which can be referenced by individual mappings.

  • sourceRoot: Optional. The URL root from which all sources are relative.

  • sourcesContent: Optional. An array of contents of the original source files.

  • mappings: A string of base64 VLQs which contain the actual mappings.

  • file: Optional. The generated filename this source map is associated with.

SourceMapConsumer.prototype.computeColumnSpans()

Compute the last column for each generated mapping. The last column is inclusive.

SourceMapConsumer.prototype.originalPositionFor(generatedPosition)

Returns the original source, line, and column information for the generated source’s line and column positions provided. The only argument is an object with the following properties:

  • line: The line number in the generated source.

  • column: The column number in the generated source.

  • bias: Either SourceMapConsumer.GREATEST_LOWER_BOUND or SourceMapConsumer.LEAST_UPPER_BOUND. Specifies whether to return the closest element that is smaller than or greater than the one we are searching for, respectively, if the exact element cannot be found. Defaults to SourceMapConsumer.GREATEST_LOWER_BOUND.

and an object is returned with the following properties:

  • source: The original source file, or null if this information is not available.

  • line: The line number in the original source, or null if this information is not available.

  • column: The column number in the original source, or null if this information is not available.

  • name: The original identifier, or null if this information is not available.

SourceMapConsumer.prototype.generatedPositionFor(originalPosition)

Returns the generated line and column information for the original source, line, and column positions provided. The only argument is an object with the following properties:

  • source: The filename of the original source.

  • line: The line number in the original source.

  • column: The column number in the original source.

and an object is returned with the following properties:

  • line: The line number in the generated source, or null.

  • column: The column number in the generated source, or null.

SourceMapConsumer.prototype.allGeneratedPositionsFor(originalPosition)

Returns all generated line and column information for the original source, line, and column provided. If no column is provided, returns all mappings corresponding to either the line we are searching for or the next closest line that has any mappings. Otherwise, returns all mappings corresponding to the given line and either the column we are searching for or the next closest column that has any offsets.

The only argument is an object with the following properties:

  • source: The filename of the original source.

  • line: The line number in the original source.

  • column: Optional. The column number in the original source.

and an array of objects is returned, each with the following properties:

  • line: The line number in the generated source, or null.

  • column: The column number in the generated source, or null.

SourceMapConsumer.prototype.hasContentsOfAllSources()

Return true if we have the embedded source content for every source listed in the source map, false otherwise.

In other words, if this method returns true, then consumer.sourceContentFor(s) will succeed for every source s in consumer.sources.

SourceMapConsumer.prototype.sourceContentFor(source[, returnNullOnMissing])

Returns the original source content for the source provided. The only argument is the URL of the original source file.

If the source content for the given source is not found, then an error is thrown. Optionally, pass true as the second param to have null returned instead.

SourceMapConsumer.prototype.eachMapping(callback, context, order)

Iterate over each mapping between an original source/line/column and a generated line/column in this source map.

  • callback: The function that is called with each mapping. Mappings have the form { source, generatedLine, generatedColumn, originalLine, originalColumn, name }

  • context: Optional. If specified, this object will be the value of this every time that callback is called.

  • order: Either SourceMapConsumer.GENERATED_ORDER or SourceMapConsumer.ORIGINAL_ORDER. Specifies whether you want to iterate over the mappings sorted by the generated file’s line/column order or the original’s source/line/column order, respectively. Defaults to SourceMapConsumer.GENERATED_ORDER.

SourceMapGenerator

An instance of the SourceMapGenerator represents a source map which is being built incrementally.

new SourceMapGenerator([startOfSourceMap])

You may pass an object with the following properties:

  • file: The filename of the generated source that this source map is associated with.

  • sourceRoot: A root for all relative URLs in this source map.

  • skipValidation: Optional. When true, disables validation of mappings as they are added. This can improve performance but should be used with discretion, as a last resort. Even then, one should avoid using this flag when running tests, if possible.

SourceMapGenerator.fromSourceMap(sourceMapConsumer)

Creates a new SourceMapGenerator from an existing SourceMapConsumer instance.

  • sourceMapConsumer: The SourceMap.

SourceMapGenerator.prototype.addMapping(mapping)

Add a single mapping from original source line and column to the generated source’s line and column for this source map being created. The mapping object should have the following properties:

  • generated: An object with the generated line and column positions.

  • original: An object with the original line and column positions.

  • source: The original source file (relative to the sourceRoot).

  • name: An optional original token name for this mapping.

SourceMapGenerator.prototype.setSourceContent(sourceFile, sourceContent)

Set the source content for an original source file.

  • sourceFile: the URL of the original source file.

  • sourceContent: the content of the source file.

SourceMapGenerator.prototype.applySourceMap(sourceMapConsumer[, sourceFile[, sourceMapPath]])

Applies a SourceMap for a source file to the SourceMap. Each mapping to the supplied source file is rewritten using the supplied SourceMap. Note: The resolution for the resulting mappings is the minimum of this map and the supplied map.

  • sourceMapConsumer: The SourceMap to be applied.

  • sourceFile: Optional. The filename of the source file. If omitted, sourceMapConsumer.file will be used, if it exists. Otherwise an error will be thrown.

  • sourceMapPath: Optional. The dirname of the path to the SourceMap to be applied. If relative, it is relative to the SourceMap.

    This parameter is needed when the two SourceMaps aren’t in the same directory, and the SourceMap to be applied contains relative source paths. If so, those relative source paths need to be rewritten relative to the SourceMap.

    If omitted, it is assumed that both SourceMaps are in the same directory, thus not needing any rewriting. (Supplying '.' has the same effect.)

SourceMapGenerator.prototype.toString()

Renders the source map being generated to a string.

SourceNode

SourceNodes provide a way to abstract over interpolating and/or concatenating snippets of generated JavaScript source code, while maintaining the line and column information associated between those snippets and the original source code. This is useful as the final intermediate representation a compiler might use before outputting the generated JS and source map.

new SourceNode([line, column, source[, chunk[, name]]])

  • line: The original line number associated with this source node, or null if it isn’t associated with an original line.

  • column: The original column number associated with this source node, or null if it isn’t associated with an original column.

  • source: The original source’s filename; null if no filename is provided.

  • chunk: Optional. Is immediately passed to SourceNode.prototype.add, see below.

  • name: Optional. The original identifier.

SourceNode.fromStringWithSourceMap(code, sourceMapConsumer[, relativePath])

Creates a SourceNode from generated code and a SourceMapConsumer.

  • code: The generated code.

  • sourceMapConsumer: The SourceMap for the generated code.

  • relativePath: The optional path that relative sources in sourceMapConsumer should be relative to.

SourceNode.prototype.add(chunk)

Add a chunk of generated JS to this source node.

  • chunk: A string snippet of generated JS code, another instance of SourceNode, or an array where each member is one of those things.

SourceNode.prototype.prepend(chunk)

Prepend a chunk of generated JS to this source node.

  • chunk: A string snippet of generated JS code, another instance of SourceNode, or an array where each member is one of those things.

SourceNode.prototype.setSourceContent(sourceFile, sourceContent)

Set the source content for a source file. This will be added to the SourceMap in the sourcesContent field.

  • sourceFile: The filename of the source file

  • sourceContent: The content of the source file

SourceNode.prototype.walk(fn)

Walk over the tree of JS snippets in this node and its children. The walking function is called once for each snippet of JS and is passed that snippet along with its associated original source's line/column location.

  • fn: The traversal function.

SourceNode.prototype.walkSourceContents(fn)

Walk over the tree of SourceNodes. The walking function is called for each source file content and is passed the filename and source content.

  • fn: The traversal function.

SourceNode.prototype.join(sep)

Like Array.prototype.join except for SourceNodes. Inserts the separator between each of this source node’s children.

  • sep: The separator.

SourceNode.prototype.replaceRight(pattern, replacement)

Call String.prototype.replace on the very right-most source snippet. Useful for trimming white space from the end of a source node, etc.

  • pattern: The pattern to replace.

  • replacement: The thing to replace the pattern with.

SourceNode.prototype.toString()

Return the string representation of this source node. Walks over the tree and concatenates all the various snippets together to one string.

SourceNode.prototype.toStringWithSourceMap([startOfSourceMap])

Returns the string representation of this tree of source nodes, plus a SourceMapGenerator which contains all the mappings between the generated and original sources.

The arguments are the same as those to new SourceMapGenerator.



braces NPM version NPM monthly downloads NPM total downloads Linux Build Status Windows Build Status

Bash-like brace expansion, implemented in JavaScript. Safer than other brace expansion libs, with complete support for the Bash 4.3 braces specification, without sacrificing speed.

Please consider following this project’s author, Jon Schlinkert, and consider starring the project to show your :heart: and support.

Install

Install with npm:

Why use braces?

Brace patterns are great for matching ranges. Users (and implementors) shouldn't have to think about whether or not they will break their application (or yours) by accidentally defining an aggressive brace pattern. Braces is the only library that offers a solution to this problem.

Usage

The main export is a function that takes one or more brace patterns and options.

By default, braces returns an optimized regex-source string. To get an array of brace patterns, use braces.expand().

The following section explains the difference in more detail. (If you're curious about "why" braces does this by default, see brace matching pitfalls.)

Optimized vs. expanded braces

Optimized

By default, patterns are optimized for regex and matching:

Expanded

To expand patterns the same way as Bash or minimatch, use the .expand method:

Or use options.expand:

Features

Lists

Uses fill-range for expanding alphabetical or numeric lists:

Sequences

Uses fill-range for expanding alphabetical or numeric ranges (bash “sequences”):

Steps

Steps, or increments, may be used with ranges:

When the .optimize method is used, or options.optimize is set to true, sequences are passed to to-regex-range for expansion.

Nesting

Brace patterns may be nested. The results of each expanded string are not sorted, and left to right order is preserved.

“Expanded” braces

“Optimized” braces

Escaping

Escaping braces

A brace pattern will not be expanded or evaluated if either the opening or closing brace is escaped:

Escaping commas

Commas inside braces may also be escaped:

Single items

Following bash conventions, a brace pattern is also not expanded when it contains a single character:

Options

options.maxLength

Type: Number

Default: 65,536

Description: Limit the length of the input string. Useful when the input string is generated or your application allows users to pass a string, et cetera.

options.expand

Type: Boolean

Default: undefined

Description: Generate an "expanded" brace pattern (this option is unnecessary with the .expand method, which does the same thing).

options.optimize

Type: Boolean

Default: true

Description: Enabled by default; patterns are compiled into optimized regex-source strings (see Optimized vs. expanded braces).

options.nodupes

Type: Boolean

Default: true

Description: Duplicates are removed by default. To keep duplicates, pass { nodupes: false } in the options.

options.rangeLimit

Type: Number

Default: 250

Description: When braces.expand() is used, or options.expand is true, brace patterns will automatically be optimized when the difference between the range minimum and range maximum exceeds the rangeLimit. This is to prevent huge ranges from freezing your application.

You can set this to any number, or set options.rangeLimit to Infinity to disable this altogether.

Examples

options.transform

Type: Function

Default: undefined

Description: Customize range expansion.

options.quantifiers

Type: Boolean

Default: undefined

Description: In regular expressions, quantifiers can be used to specify how many times a token can be repeated. For example, a{1,3} will match the letter a one to three times.

Unfortunately, regex quantifiers happen to share the same syntax as Bash lists.

The quantifiers option tells braces to detect when regex quantifiers are defined in the given pattern, and not to try to expand them as lists.

Examples

options.unescape

Type: Boolean

Default: undefined

Description: Strip backslashes that were used for escaping from the result.

What is “brace expansion”?

Brace expansion is a type of parameter expansion that was made popular by unix shells for generating lists of strings, as well as regex-like matching when used alongside wildcards (globs).

In addition to “expansion”, braces are also used for matching. In other words:

More about brace expansion (click to expand)

There are two main types of brace expansion:

  1. lists: which are defined using comma-separated values inside curly braces: {a,b,c}
  2. sequences: which are defined using a starting value and an ending value, separated by two dots: a{1..3}b. Optionally, a third argument may be passed to define a “step” or increment to use: a{1..100..10}b. These are also sometimes referred to as “ranges”.

Here are some example brace patterns to illustrate how they work:

Sets

{a,b,c}       => a b c
{a,b,c}{1,2}  => a1 a2 b1 b2 c1 c2

Sequences

{1..9}        => 1 2 3 4 5 6 7 8 9
{4..-4}       => 4 3 2 1 0 -1 -2 -3 -4
{1..20..3}    => 1 4 7 10 13 16 19
{a..j}        => a b c d e f g h i j
{j..a}        => j i h g f e d c b a
{a..z..3}     => a d g j m p s v y

Combination

Sets and sequences can be mixed together or used along with any other strings.

{a,b,c}{1..3}   => a1 a2 a3 b1 b2 b3 c1 c2 c3
foo/{a,b,c}/bar => foo/a/bar foo/b/bar foo/c/bar

The fact that braces can be “expanded” from relatively simple patterns makes them ideal for quickly generating test fixtures, file paths, and similar use cases.

Brace matching

In addition to expansion, brace patterns are also useful for performing regular-expression-like matching.

For example, the pattern foo/{1..3}/bar would match any of the following strings:

foo/1/bar
foo/2/bar
foo/3/bar

But not:

baz/1/qux
baz/2/qux
baz/3/qux

Braces can also be combined with glob patterns to perform more advanced wildcard matching. For example, the pattern */{1..3}/* would match any of the following strings:

foo/1/bar
foo/2/bar
foo/3/bar
baz/1/qux
baz/2/qux
baz/3/qux

Brace matching pitfalls

Although brace patterns offer a user-friendly way of matching ranges or sets of strings, there are also some major disadvantages and potential risks you should be aware of.

tldr

“brace bombs”

  • brace expansion can eat up a huge amount of processing resources
  • as brace patterns increase linearly in size, the system resources required to expand the pattern increase exponentially
  • users can accidentally (or intentionally) exhaust your system’s resources resulting in the equivalent of a DoS attack (bonus: no programming knowledge is required!)

For a more detailed explanation with examples, see the geometric complexity section.

The solution

Jump to the performance section to see how Braces solves this problem in comparison to other libraries.

Geometric complexity

At minimum, brace patterns with sets limited to two elements have quadratic, or O(n^2), complexity. But the complexity increases exponentially, to O(n^c), as the number of sets and the number of elements per set grow.

For example, the following sets demonstrate quadratic (O(n^2)) complexity:

{1,2}{3,4}      => (2X2)    => 13 14 23 24
{1,2}{3,4}{5,6} => (2X2X2)  => 135 136 145 146 235 236 245 246

But add an element to a set, and we get an n-fold Cartesian product with O(n^c) complexity:

{1,2,3}{4,5,6}{7,8,9} => (3X3X3) => 147 148 149 157 158 159 167 168 169 247 248 
                                    249 257 258 259 267 268 269 347 348 349 357 
                                    358 359 367 368 369

Now, imagine how this complexity grows given that each element is an n-tuple:

{1..100}{1..100}         => (100X100)     => 10,000 elements (38.4 kB)
{1..100}{1..100}{1..100} => (100X100X100) => 1,000,000 elements (5.76 MB)

Although these examples are clearly contrived, they demonstrate how brace patterns can quickly grow out of control.
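The counts above are simply the product of the set sizes, which is easy to verify with a small sketch (not part of braces itself):

```javascript
// The number of expansions of a brace pattern is the size of the
// Cartesian product of its sets/ranges: the product of the set sizes.
function expansionCount(setSizes) {
  return setSizes.reduce(function (total, size) {
    return total * size;
  }, 1);
}

console.log(expansionCount([2, 2]));          // 4       -> {1,2}{3,4}
console.log(expansionCount([3, 3, 3]));       // 27      -> {1,2,3}{4,5,6}{7,8,9}
console.log(expansionCount([100, 100, 100])); // 1000000 -> {1..100}{1..100}{1..100}
```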

More information

Interested in learning more about brace expansion?

Performance

Braces is not only screaming fast, it’s also more accurate than other brace expansion libraries.

Better algorithms

Fortunately there is a solution to the “brace bomb” problem: don’t expand brace patterns into an array when they’re used for matching.

Instead, convert the pattern into an optimized regular expression. This is easier said than done, and braces is the only library that does this currently.

The proof is in the numbers

Minimatch gets exponentially slower as patterns increase in complexity; braces does not. The following results were generated using braces() and minimatch.braceExpand(), respectively.

Pattern braces minimatch
{1..9007199254740991} [1] 298 B (5ms 459μs) N/A (freezes)
{1..1000000000000000} 41 B (1ms 15μs) N/A (freezes)
{1..100000000000000} 40 B (890μs) N/A (freezes)
{1..10000000000000} 39 B (2ms 49μs) N/A (freezes)
{1..1000000000000} 38 B (608μs) N/A (freezes)
{1..100000000000} 37 B (397μs) N/A (freezes)
{1..10000000000} 35 B (983μs) N/A (freezes)
{1..1000000000} 34 B (798μs) N/A (freezes)
{1..100000000} 33 B (733μs) N/A (freezes)
{1..10000000} 32 B (5ms 632μs) 78.89 MB (16s 388ms 569μs)
{1..1000000} 31 B (1ms 381μs) 6.89 MB (1s 496ms 887μs)
{1..100000} 30 B (950μs) 588.89 kB (146ms 921μs)
{1..10000} 29 B (1ms 114μs) 48.89 kB (14ms 187μs)
{1..1000} 28 B (760μs) 3.89 kB (1ms 453μs)
{1..100} 22 B (345μs) 291 B (196μs)
{1..10} 10 B (533μs) 20 B (37μs)
{1..3} 7 B (190μs) 5 B (27μs)

Faster algorithms

When you need expansion, braces is still much faster.

(the following results were generated using braces.expand() and minimatch.braceExpand(), respectively)

Pattern braces minimatch
{1..10000000} 78.89 MB (2s 698ms 642μs) 78.89 MB (18s 601ms 974μs)
{1..1000000} 6.89 MB (458ms 576μs) 6.89 MB (1s 491ms 621μs)
{1..100000} 588.89 kB (20ms 728μs) 588.89 kB (156ms 919μs)
{1..10000} 48.89 kB (2ms 202μs) 48.89 kB (13ms 641μs)
{1..1000} 3.89 kB (1ms 796μs) 3.89 kB (1ms 958μs)
{1..100} 291 B (424μs) 291 B (211μs)
{1..10} 20 B (487μs) 20 B (72μs)
{1..3} 5 B (166μs) 5 B (27μs)

If you’d like to run these comparisons yourself, see test/support/generate.js.

Benchmarks

Running benchmarks

Install dev dependencies:

Latest results

Benchmarking: (8 of 8)
 · combination-nested
 · combination
 · escaped
 · list-basic
 · list-multiple
 · no-braces
 · sequence-basic
 · sequence-multiple

# benchmark/fixtures/combination-nested.js (52 bytes)
  brace-expansion x 4,756 ops/sec ±1.09% (86 runs sampled)
  braces x 11,202,303 ops/sec ±1.06% (88 runs sampled)
  minimatch x 4,816 ops/sec ±0.99% (87 runs sampled)

  fastest is braces

# benchmark/fixtures/combination.js (51 bytes)
  brace-expansion x 625 ops/sec ±0.87% (87 runs sampled)
  braces x 11,031,884 ops/sec ±0.72% (90 runs sampled)
  minimatch x 637 ops/sec ±0.84% (88 runs sampled)

  fastest is braces

# benchmark/fixtures/escaped.js (44 bytes)
  brace-expansion x 163,325 ops/sec ±1.05% (87 runs sampled)
  braces x 10,655,071 ops/sec ±1.22% (88 runs sampled)
  minimatch x 147,495 ops/sec ±0.96% (88 runs sampled)

  fastest is braces

# benchmark/fixtures/list-basic.js (40 bytes)
  brace-expansion x 99,726 ops/sec ±1.07% (83 runs sampled)
  braces x 10,596,584 ops/sec ±0.98% (88 runs sampled)
  minimatch x 100,069 ops/sec ±1.17% (86 runs sampled)

  fastest is braces

# benchmark/fixtures/list-multiple.js (52 bytes)
  brace-expansion x 34,348 ops/sec ±1.08% (88 runs sampled)
  braces x 9,264,131 ops/sec ±1.12% (88 runs sampled)
  minimatch x 34,893 ops/sec ±0.87% (87 runs sampled)

  fastest is braces

# benchmark/fixtures/no-braces.js (48 bytes)
  brace-expansion x 275,368 ops/sec ±1.18% (89 runs sampled)
  braces x 9,134,677 ops/sec ±0.95% (88 runs sampled)
  minimatch x 3,755,954 ops/sec ±1.13% (89 runs sampled)

  fastest is braces

# benchmark/fixtures/sequence-basic.js (41 bytes)
  brace-expansion x 5,492 ops/sec ±1.35% (87 runs sampled)
  braces x 8,485,034 ops/sec ±1.28% (89 runs sampled)
  minimatch x 5,341 ops/sec ±1.17% (87 runs sampled)

  fastest is braces

# benchmark/fixtures/sequence-multiple.js (51 bytes)
  brace-expansion x 116 ops/sec ±0.77% (77 runs sampled)
  braces x 9,445,118 ops/sec ±1.32% (84 runs sampled)
  minimatch x 109 ops/sec ±1.16% (76 runs sampled)

  fastest is braces

About

Contributing

Pull requests and stars are always welcome. For bugs and feature requests, please create an issue.

Running Tests

Running and reviewing unit tests is a great way to get familiar with a library and its API. You can install dependencies and run tests with the following command:

Building docs

(This project’s readme.md is generated by verb, please don’t edit the readme directly. Any changes to the readme must be made in the .verb.md readme template.)

To generate the readme, run the following command:

You might also be interested in these projects:

  • expand-brackets: Expand POSIX bracket expressions (character classes) in glob patterns. | homepage
  • extglob: Extended glob support for JavaScript. Adds (almost) the expressive power of regular expressions to glob… more | homepage
  • fill-range: Fill in a range of numbers or letters, optionally passing an increment or step to… more | homepage
  • micromatch: Glob matching for javascript/node.js. A drop-in replacement and faster alternative to minimatch and multimatch. | homepage
  • nanomatch: Fast, minimal glob matcher for node.js. Similar to micromatch, minimatch and multimatch, but complete Bash… more | homepage
Commits Contributor
188 jonschlinkert
4 doowb
1 es128
1 eush77
1 hemanth

Author

Jon Schlinkert


This file was generated by verb-generate-readme, v0.6.0, on February 17, 2018.


  1. This is the largest safe integer allowed in JavaScript.



date-and-time

Circle CI

This library is a minimalist collection of functions for manipulating JS date and time. It’s tiny, simple, and easy to learn.

Why

JS modules nowadays are getting larger and more complex, and they often pull in many dependencies. Keeping each module simple and small is worthwhile.

Features

  • Minimalist. Approximately 2k (minified and gzipped).
  • Extensible. Plugin system support.
  • Multi-language support.
  • Universal / Isomorphic. Works in any JavaScript environment.
  • Older browser support. Even works on IE6. :)

Install

  • via npm:
npm install date-and-time --save
  • local:

Recent Changes

Usage

  • Node.js:
  • With a transpiler:
  • The browser:

API

format(dateObj, formatString[, utc])

  • Formatting a date.
    • @param {Date} dateObj - a Date object
    • @param {string|Array.<string>} arg - a format string or a compiled object
    • @param {boolean} [utc] - output as UTC
    • @returns {string} a formatted string

Available tokens and their meanings are as follows:

token meaning examples of output
YYYY four-digit year 0999, 2015
YY two-digit year 99, 01, 15
Y four-digit year without zero-padding 2, 44, 888, 2015
MMMM month name (long) January, December
MMM month name (short) Jan, Dec
MM month with zero-padding 01, 12
M month 1, 12
DD date with zero-padding 02, 31
D date 2, 31
dddd day of week (long) Friday, Sunday
ddd day of week (short) Fri, Sun
dd day of week (very short) Fr, Su
HH 24-hour with zero-padding 23, 08
H 24-hour 23, 8
hh 12-hour with zero-padding 11, 08
h 12-hour 11, 8
A meridiem (uppercase) AM, PM
mm minute with zero-padding 14, 07
m minute 14, 7
ss second with zero-padding 05, 10
s second 5, 10
SSS millisecond (high accuracy) 753, 022
SS millisecond (middle accuracy) 75, 02
S millisecond (low accuracy) 7, 0
Z timezone offset +0100, -0800

You can also use the following tokens by importing plugins. See PLUGINS.md for details.

token meaning examples of output
DDD ordinal notation of date 1st, 2nd, 3rd
AA meridiem (uppercase with ellipsis) A.M., P.M.
a meridiem (lowercase) am, pm
aa meridiem (lowercase with ellipsis) a.m., p.m.

NOTE 1. Comments

Strings in square brackets [...] in the formatString are ignored as comments (the bracketed text is not interpreted as tokens):

NOTE 2. Output as UTC

This function usually outputs a local date-time string. Set the utc option (the 3rd parameter) to true if you would like to get a UTC date-time string.

NOTE 3. More Tokens

You can also define your own tokens. See EXTEND.md for details.

parse(dateString, arg[, utc])

  • Parsing a date string.
    • @param {string} dateString - a date string
    • @param {string|Array.<string>} arg - a format string or a compiled object
    • @param {boolean} [utc] - input as UTC
    • @returns {Date} a constructed date

Available tokens and their meanings are as follows:

token meaning examples of acceptable form
YYYY four-digit year 0999, 2015
Y four-digit year without zero-padding 2, 44, 88, 2015
MMMM month name (long) January, December
MMM month name (short) Jan, Dec
MM month with zero-padding 01, 12
M month 1, 12
DD date with zero-padding 02, 31
D date 2, 31
HH 24-hour with zero-padding 23, 08
H 24-hour 23, 8
hh 12-hour with zero-padding 11, 08
h 12-hour 11, 8
A meridiem (uppercase) AM, PM
mm minute with zero-padding 14, 07
m minute 14, 7
ss second with zero-padding 05, 10
s second 5, 10
SSS millisecond (high accuracy) 753, 022
SS millisecond (middle accuracy) 75, 02
S millisecond (low accuracy) 7, 0
Z timezone offset +0100, -0800

You can also use the following tokens by importing plugins. See PLUGINS.md for details.

token meaning examples of acceptable form
YY two-digit year 90, 00, 08, 19
Y two-digit year without zero-padding 90, 0, 8, 19
A meridiem AM, PM, A.M., P.M., am, pm, a.m., p.m.
dddd day of week (long) Friday, Sunday
ddd day of week (short) Fri, Sun
dd day of week (very short) Fr, Su
SSSSSS microsecond (high accuracy) 123456, 000001
SSSSS microsecond (middle accuracy) 12345, 00001
SSSS microsecond (low accuracy) 1234, 0001

NOTE 1. Invalid Date

If the function fails to parse, it will return Invalid Date. Note that Invalid Date is a Date object, not NaN or null. You can tell whether the Date object is invalid as follows:

NOTE 2. Input as UTC

This function usually assumes the dateString is a local date-time. Set to true the utc option (the 3rd parameter) if it is a UTC date-time.

NOTE 3. Default Date Time

The default date is January 1, 1970, and the default time is 00:00:00.000. Omitted values are filled in with these defaults:

NOTE 4. Max Date / Min Date

The maximum parsable date is December 31, 9999; the minimum is January 1, 0001.

NOTE 5. 12-hour notation and Meridiem

When using the hh or h (12-hour) token, also use the A (meridiem) token to get the correct value.

NOTE 6. Token disablement

Use square brackets [] when a date-time string itself contains token characters. Tokens inside square brackets in the formatString are interpreted as literal characters:

NOTE 7. Wildcard

A white space works as a wildcard token. This token is not interpreted as anything, so it can be used to skip a variable part of the string. For example, when you would like to ignore the time part of a date string, you can write as follows:

NOTE 8. Ellipsis

The parser also supports the ... (ellipsis) token. The above example can also be written like this:

compile(formatString)

  • Compiling a format string for the parser.
    • @param {string} formatString - a format string
    • @returns {Array.<string>} a compiled object

If you are going to call format(), parse(), or isValid() many times with the same format string, it is recommended to precompile it once and reuse the compiled object for performance.

preparse(dateString, arg)

  • Pre-parsing a date string.
    • @param {string} dateString - a date string
    • @param {string|Array.<string>} arg - a format string or a compiled object
    • @returns {Object} a date structure

This function takes exactly the same parameters as parse(), but unlike parse() it returns a date structure as follows:

This date structure represents the parsing result. From it, you can tell how the date string was parsed (or why the parsing failed).

isValid(arg1[, arg2])

  • Validation.
    • @param {Object|string} arg1 - a date structure or a date string
    • @param {string|Array.<string>} [arg2] - a format string or a compiled object
    • @returns {boolean} whether the date string is a valid date

This function takes either the same parameters as parse() or a date structure returned by preparse(), and evaluates their validity.

transform(dateString, arg1, arg2[, utc])

  • Transformation of date string.
    • @param {string} dateString - a date string
    • @param {string|Array.<string>} arg1 - the format string of the date string or the compiled object
    • @param {string|Array.<string>} arg2 - the transformed format string or the compiled object
    • @param {boolean} [utc] - output as UTC
    • @returns {string} a formatted string

This function transforms the format of a date string. The 2nd parameter, arg1, is the format string of the input; its available tokens are the same as parse()’s. The 3rd parameter, arg2, is the format string of the output; its available tokens are the same as format()’s.

addYears(dateObj, years)

  • Adding years.
    • @param {Date} dateObj - a Date object
    • @param {number} years - number of years to add
    • @returns {Date} a date after adding the value

addMonths(dateObj, months)

  • Adding months.
    • @param {Date} dateObj - a Date object
    • @param {number} months - number of months to add
    • @returns {Date} a date after adding the value

addDays(dateObj, days)

  • Adding days.
    • @param {Date} dateObj - a Date object
    • @param {number} days - number of days to add
    • @returns {Date} a date after adding the value

addHours(dateObj, hours)

  • Adding hours.
    • @param {Date} dateObj - a Date object
    • @param {number} hours - number of hours to add
    • @returns {Date} a date after adding the value

addMinutes(dateObj, minutes)

  • Adding minutes.
    • @param {Date} dateObj - a Date object
    • @param {number} minutes - number of minutes to add
    • @returns {Date} a date after adding the value

addSeconds(dateObj, seconds)

  • Adding seconds.
    • @param {Date} dateObj - a Date object
    • @param {number} seconds - number of seconds to add
    • @returns {Date} a date after adding the value

addMilliseconds(dateObj, milliseconds)

  • Adding milliseconds.
    • @param {Date} dateObj - a Date object
    • @param {number} milliseconds - number of milliseconds to add
    • @returns {Date} a date after adding the value

subtract(date1, date2)

  • Subtracting.
    • @param {Date} date1 - a Date object
    • @param {Date} date2 - a Date object
    • @returns {Object} a result object subtracting date2 from date1

isLeapYear(y)

  • Leap year.
    • @param {number} y - year
    • @returns {boolean} whether the year is a leap year

isSameDay(date1, date2)

  • Comparison of two dates.
    • @param {Date} date1 - a Date object
    • @param {Date} date2 - a Date object
    • @returns {boolean} whether the dates are the same day (times are ignored)

locale([code[, locale]])

  • Changing the locale or setting a new locale definition.
    • @param {string} code - language code
    • @param {Object} [locale] - locale definition
    • @returns {string} current language code

It returns the current language code if called without any parameters.

To switch to any other language, call it with a language code.

See LOCALE.md for details.

extend(extension)

  • Locale extension.
    • @param {Object} extension - locale definition
    • @returns {void}

Extends the current locale. See EXTEND.md for details.

plugin(name[, extension])

  • Plugin import or definition.
    • @param {string} name - plugin name
    • @param {Object} extension - locale definition
    • @returns {void}

A plugin is a named locale definition registered with extend(). See PLUGINS.md for details.

Browser Support

Chrome, Firefox, Safari, Edge, and Internet Explorer 6+.



Source Map

Build Status

NPM

This is a library to generate and consume the source map format described here.

Use with Node

npm install source-map

Use on the Web

<script src="https://raw.githubusercontent.com/mozilla/source-map/master/dist/source-map.min.js" defer></script>

Table of Contents

Examples

Consuming a source map

Generating a source map

In depth guide: Compiling to JavaScript, and Debugging with Source Maps

With SourceNode (high level API)

With SourceMapGenerator (low level API)

API

Get a reference to the module:

SourceMapConsumer

A SourceMapConsumer instance represents a parsed source map which we can query for information about the original file positions by giving it a file position in the generated source.

new SourceMapConsumer(rawSourceMap)

The only parameter is the raw source map (either as a string which can be JSON.parse’d, or an object). According to the spec, source maps have the following attributes:

  • version: Which version of the source map spec this map is following.

  • sources: An array of URLs to the original source files.

  • names: An array of identifiers which can be referenced by individual mappings.

  • sourceRoot: Optional. The URL root from which all sources are relative.

  • sourcesContent: Optional. An array of contents of the original source files.

  • mappings: A string of base64 VLQs which contain the actual mappings.

  • file: Optional. The generated filename this source map is associated with.

SourceMapConsumer.prototype.computeColumnSpans()

Compute the last column for each generated mapping. The last column is inclusive.

SourceMapConsumer.prototype.originalPositionFor(generatedPosition)

Returns the original source, line, and column information for the generated source’s line and column positions provided. The only argument is an object with the following properties:

  • line: The line number in the generated source. Line numbers in this library are 1-based (note that the underlying source map specification uses 0-based line numbers – this library handles the translation).

  • column: The column number in the generated source. Column numbers in this library are 0-based.

  • bias: Either SourceMapConsumer.GREATEST_LOWER_BOUND or SourceMapConsumer.LEAST_UPPER_BOUND. Specifies whether to return the closest element that is smaller than or greater than the one we are searching for, respectively, if the exact element cannot be found. Defaults to SourceMapConsumer.GREATEST_LOWER_BOUND.

and an object is returned with the following properties:

  • source: The original source file, or null if this information is not available.

  • line: The line number in the original source, or null if this information is not available. The line number is 1-based.

  • column: The column number in the original source, or null if this information is not available. The column number is 0-based.

  • name: The original identifier, or null if this information is not available.

SourceMapConsumer.prototype.generatedPositionFor(originalPosition)

Returns the generated line and column information for the original source, line, and column positions provided. The only argument is an object with the following properties:

  • source: The filename of the original source.

  • line: The line number in the original source. The line number is 1-based.

  • column: The column number in the original source. The column number is 0-based.

and an object is returned with the following properties:

  • line: The line number in the generated source, or null. The line number is 1-based.

  • column: The column number in the generated source, or null. The column number is 0-based.

SourceMapConsumer.prototype.allGeneratedPositionsFor(originalPosition)

Returns all generated line and column information for the original source, line, and column provided. If no column is provided, returns all mappings corresponding to either the line we are searching for or the next closest line that has any mappings. Otherwise, returns all mappings corresponding to the given line and either the column we are searching for or the next closest column that has any offsets.

The only argument is an object with the following properties:

  • source: The filename of the original source.

  • line: The line number in the original source. The line number is 1-based.

  • column: Optional. The column number in the original source. The column number is 0-based.

and an array of objects is returned, each with the following properties:

  • line: The line number in the generated source, or null. The line number is 1-based.

  • column: The column number in the generated source, or null. The column number is 0-based.

SourceMapConsumer.prototype.hasContentsOfAllSources()

Return true if we have the embedded source content for every source listed in the source map, false otherwise.

In other words, if this method returns true, then consumer.sourceContentFor(s) will succeed for every source s in consumer.sources.

SourceMapConsumer.prototype.sourceContentFor(source[, returnNullOnMissing])

Returns the original source content for the source provided. The only argument is the URL of the original source file.

If the source content for the given source is not found, then an error is thrown. Optionally, pass true as the second param to have null returned instead.

SourceMapConsumer.prototype.eachMapping(callback, context, order)

Iterate over each mapping between an original source/line/column and a generated line/column in this source map.

  • callback: The function that is called with each mapping. Mappings have the form { source, generatedLine, generatedColumn, originalLine, originalColumn, name }

  • context: Optional. If specified, this object will be the value of this every time that callback is called.

  • order: Either SourceMapConsumer.GENERATED_ORDER or SourceMapConsumer.ORIGINAL_ORDER. Specifies whether you want to iterate over the mappings sorted by the generated file’s line/column order or the original’s source/line/column order, respectively. Defaults to SourceMapConsumer.GENERATED_ORDER.

SourceMapGenerator

An instance of the SourceMapGenerator represents a source map which is being built incrementally.

new SourceMapGenerator([startOfSourceMap])

You may pass an object with the following properties:

  • file: The filename of the generated source that this source map is associated with.

  • sourceRoot: A root for all relative URLs in this source map.

  • skipValidation: Optional. When true, disables validation of mappings as they are added. This can improve performance but should be used with discretion, as a last resort. Even then, one should avoid using this flag when running tests, if possible.

SourceMapGenerator.fromSourceMap(sourceMapConsumer)

Creates a new SourceMapGenerator from an existing SourceMapConsumer instance.

  • sourceMapConsumer The SourceMap.

SourceMapGenerator.prototype.addMapping(mapping)

Add a single mapping from original source line and column to the generated source’s line and column for this source map being created. The mapping object should have the following properties:

  • generated: An object with the generated line and column positions.

  • original: An object with the original line and column positions.

  • source: The original source file (relative to the sourceRoot).

  • name: An optional original token name for this mapping.

SourceMapGenerator.prototype.setSourceContent(sourceFile, sourceContent)

Set the source content for an original source file.

  • sourceFile the URL of the original source file.

  • sourceContent the content of the source file.

SourceMapGenerator.prototype.applySourceMap(sourceMapConsumer[, sourceFile[, sourceMapPath]])

Applies a SourceMap for a source file to the SourceMap. Each mapping to the supplied source file is rewritten using the supplied SourceMap. Note: The resolution for the resulting mappings is the minimum of this map and the supplied map.

  • sourceMapConsumer: The SourceMap to be applied.

  • sourceFile: Optional. The filename of the source file. If omitted, sourceMapConsumer.file will be used, if it exists. Otherwise an error will be thrown.

  • sourceMapPath: Optional. The dirname of the path to the SourceMap to be applied. If relative, it is relative to the SourceMap.

    This parameter is needed when the two SourceMaps aren’t in the same directory, and the SourceMap to be applied contains relative source paths. If so, those relative source paths need to be rewritten relative to the SourceMap.

    If omitted, it is assumed that both SourceMaps are in the same directory, thus not needing any rewriting. (Supplying '.' has the same effect.)

SourceMapGenerator.prototype.toString()

Renders the source map being generated to a string.

SourceNode

SourceNodes provide a way to abstract over interpolating and/or concatenating snippets of generated JavaScript source code, while maintaining the line and column information associated between those snippets and the original source code. This is useful as the final intermediate representation a compiler might use before outputting the generated JS and source map.

new SourceNode([line, column, source[, chunk[, name]]])

  • line: The original line number associated with this source node, or null if it isn’t associated with an original line. The line number is 1-based.

  • column: The original column number associated with this source node, or null if it isn’t associated with an original column. The column number is 0-based.

  • source: The original source’s filename; null if no filename is provided.

  • chunk: Optional. Is immediately passed to SourceNode.prototype.add, see below.

  • name: Optional. The original identifier.

SourceNode.fromStringWithSourceMap(code, sourceMapConsumer[, relativePath])

Creates a SourceNode from generated code and a SourceMapConsumer.

  • code: The generated code

  • sourceMapConsumer The SourceMap for the generated code

  • relativePath The optional path that relative sources in sourceMapConsumer should be relative to.

SourceNode.prototype.add(chunk)

Add a chunk of generated JS to this source node.

  • chunk: A string snippet of generated JS code, another instance of SourceNode, or an array where each member is one of those things.

SourceNode.prototype.prepend(chunk)

Prepend a chunk of generated JS to this source node.

  • chunk: A string snippet of generated JS code, another instance of SourceNode, or an array where each member is one of those things.

SourceNode.prototype.setSourceContent(sourceFile, sourceContent)

Set the source content for a source file. This will be added to the SourceMap in the sourcesContent field.

  • sourceFile: The filename of the source file

  • sourceContent: The content of the source file

SourceNode.prototype.walk(fn)

Walk over the tree of JS snippets in this node and its children. The walking function is called once for each snippet of JS and is passed that snippet and its original associated source’s line/column location.

  • fn: The traversal function.

SourceNode.prototype.walkSourceContents(fn)

Walk over the tree of SourceNodes. The walking function is called for each source file content and is passed the filename and source content.

  • fn: The traversal function.

SourceNode.prototype.join(sep)

Like Array.prototype.join except for SourceNodes. Inserts the separator between each of this source node’s children.

  • sep: The separator.

SourceNode.prototype.replaceRight(pattern, replacement)

Call String.prototype.replace on the very right-most source snippet. Useful for trimming white space from the end of a source node, etc.

  • pattern: The pattern to replace.

  • replacement: The thing to replace the pattern with.

SourceNode.prototype.toString()

Return the string representation of this source node. Walks over the tree and concatenates all the various snippets together to one string.

SourceNode.prototype.toStringWithSourceMap([startOfSourceMap])

Returns the string representation of this tree of source nodes, plus a SourceMapGenerator which contains all the mappings between the generated and original sources.

The arguments are the same as those to new SourceMapGenerator.



sshpk

Parse, convert, fingerprint and use SSH keys (both public and private) in pure node – no ssh-keygen or other external dependencies.

This library has been extracted from node-http-signature (work by Mark Cavage and Dave Eddy) and node-ssh-fingerprint (work by Dave Eddy), with additions (including ECDSA support) by Alex Wilson.

Install

npm install sshpk

Examples

Example output:

type => rsa
size => 2048 bits
comment => foo@foo.com
fingerprint => SHA256:PYC9kPVC6J873CSIbfp0LwYeczP/W4ffObNCuDJ1u5w
old-style fingerprint => a0:c8:ad:6c:32:9a:32:fa:59:cc:a9:8c:0a:0d:6e:bd

More examples: converting between formats:

Signing and verifying:

Matching fingerprints with keys:

Usage

Public keys

parseKey(data[, format = 'auto'[, options]])

Parses a key from a given data format and returns a new Key object.

Parameters

  • data – Either a Buffer or String, containing the key
  • format – String name of format to use, valid options are:
    • auto: choose automatically from all below
    • pem: supports both PKCS#1 and PKCS#8
    • ssh: standard OpenSSH format,
    • pkcs1, pkcs8: variants of pem
    • rfc4253: raw OpenSSH wire format
    • openssh: new post-OpenSSH 6.5 internal format, produced by ssh-keygen -o
    • dnssec: .key file format output by dnssec-keygen etc
    • putty: the PuTTY .ppk file format (supports truncated variant without all the lines from Private-Lines: onwards)
  • options – Optional Object, extra options, with keys:
    • filename – Optional String, name for the key being parsed (eg. the filename that was opened). Used to generate Error messages
    • passphrase – Optional String, encryption passphrase used to decrypt an encrypted PEM file

Key.isKey(obj)

Returns true if the given object is a valid Key object created by a version of sshpk compatible with this one.

Parameters

  • obj – Object to identify

Key#type

String, the type of key. Valid options are rsa, dsa, ecdsa.

Key#size

Integer, “size” of the key in bits. For RSA/DSA this is the size of the modulus; for ECDSA this is the bit size of the curve in use.

Key#comment

Optional string, a key comment used by some formats (eg the ssh format).

Key#curve

Only present if this.type === 'ecdsa', string containing the name of the named curve used with this key. Possible values include nistp256, nistp384 and nistp521.

Key#toBuffer([format = 'ssh'])

Convert the key into a given data format and return the serialized key as a Buffer.

Parameters

  • format – String name of format to use, for valid options see parseKey()

Key#toString([format = ssh])

Same as this.toBuffer(format).toString().

Key#fingerprint([algorithm = 'sha256'[, hashType = 'ssh']])

Creates a new Fingerprint object representing this Key’s fingerprint.

Parameters

  • algorithm – String name of hash algorithm to use, valid options are md5, sha1, sha256, sha384, sha512
  • hashType – String name of fingerprint hash type to use, valid options are ssh (the type of fingerprint used by OpenSSH, e.g. in ssh-keygen), spki (used by HPKP, some OpenSSL applications)

Key#createVerify([hashAlgorithm])

Creates a crypto.Verifier specialized to use this Key (and the correct public key algorithm to match it). The returned Verifier has the same API as a regular one, except that the verify() function takes only the target signature as an argument.

Parameters

  • hashAlgorithm – optional String name of hash algorithm to use, any supported by OpenSSL are valid, usually including sha1, sha256.

v.verify(signature[, format]) Parameters

  • signature – either a Signature object, or a Buffer or String
  • format – optional String, name of format to interpret given String with. Not valid if signature is a Signature or Buffer.

Key#createDiffieHellman()

Key#createDH()

Creates a Diffie-Hellman key exchange object initialized with this key and all necessary parameters. This has the same API as a crypto.DiffieHellman instance, except that functions take Key and PrivateKey objects as arguments, and return them where indicated.

This is only valid for keys belonging to a cryptosystem that supports DHE or a close analogue (i.e. dsa, ecdsa and curve25519 keys). An attempt to call this function on other keys will yield an Error.

Private keys

parsePrivateKey(data[, format = 'auto'[, options]])

Parses a private key from a given data format and returns a new PrivateKey object.

Parameters

  • data – Either a Buffer or String, containing the key
  • format – String name of format to use, valid options are:
    • auto: choose automatically from all below
    • pem: supports both PKCS#1 and PKCS#8
    • ssh, openssh: new post-OpenSSH 6.5 internal format, produced by ssh-keygen -o
    • pkcs1, pkcs8: variants of pem
    • rfc4253: raw OpenSSH wire format
    • dnssec: .private format output by dnssec-keygen etc.
  • options – Optional Object, extra options, with keys:
    • filename – Optional String, name for the key being parsed (eg. the filename that was opened). Used to generate Error messages
    • passphrase – Optional String, encryption passphrase used to decrypt an encrypted PEM file

generatePrivateKey(type[, options])

Generates a new private key of a certain key type, from random data.

Parameters

  • type – String, type of key to generate. Currently supported are 'ecdsa' and 'ed25519'
  • options – optional Object, with keys:
    • curve – optional String, for 'ecdsa' keys, specifies the curve to use. If ECDSA is specified and this option is not given, defaults to using 'nistp256'.

PrivateKey.isPrivateKey(obj)

Returns true if the given object is a valid PrivateKey object created by a version of sshpk compatible with this one.

Parameters

  • obj – Object to identify

PrivateKey#type

String, the type of key. Valid options are rsa, dsa, ecdsa.

PrivateKey#size

Integer, “size” of the key in bits. For RSA/DSA this is the size of the modulus; for ECDSA this is the bit size of the curve in use.

PrivateKey#curve

Only present if this.type === 'ecdsa', string containing the name of the named curve used with this key. Possible values include nistp256, nistp384 and nistp521.

PrivateKey#toBuffer([format = 'pkcs1'])

Convert the key into a given data format and return the serialized key as a Buffer.

Parameters

  • format – String name of format to use, valid options are listed under parsePrivateKey. Note that ED25519 keys default to openssh format instead (as they have no pkcs1 representation).

PrivateKey#toString([format = 'pkcs1'])

Same as this.toBuffer(format).toString().

PrivateKey#toPublic()

Extract just the public part of this private key, and return it as a Key object.

PrivateKey#fingerprint([algorithm = 'sha256'])

Same as this.toPublic().fingerprint().

PrivateKey#createVerify([hashAlgorithm])

Same as this.toPublic().createVerify().

PrivateKey#createSign([hashAlgorithm])

Creates a crypto.Sign specialized to use this PrivateKey (and the correct key algorithm to match it). The returned Signer has the same API as a regular one, except that the sign() function takes no arguments, and returns a Signature object.

Parameters

  • hashAlgorithm – optional String name of hash algorithm to use, any supported by OpenSSL are valid, usually including sha1, sha256.

v.sign() Parameters

  • none

PrivateKey#derive(newType)

Derives a related key of type newType from this key. Currently this is only supported to change between ed25519 and curve25519 keys which are stored with the same private key (but usually distinct public keys in order to avoid degenerate keys that lead to a weak Diffie-Hellman exchange).

Parameters

  • newType – String, type of key to derive, either ed25519 or curve25519

Fingerprints

parseFingerprint(fingerprint[, options])

Pre-parses a fingerprint, creating a Fingerprint object that can be used to quickly locate a key by using the Fingerprint#matches function.

Parameters

  • fingerprint – String, the fingerprint value, in any supported format
  • options – Optional Object, with properties:
    • algorithms – Array of strings, names of hash algorithms to limit support to. If fingerprint uses a hash algorithm not on this list, throws InvalidAlgorithmError.
    • hashType – String, the type of hash the fingerprint uses, either ssh or spki (normally auto-detected based on the format, but can be overridden)
    • type – String, the entity this fingerprint identifies, either key or certificate

Fingerprint.isFingerprint(obj)

Returns true if the given object is a valid Fingerprint object created by a version of sshpk compatible with this one.

Parameters

  • obj – Object to identify

Fingerprint#toString([format])

Returns a fingerprint as a string, in the given format.

Parameters

  • format – Optional String, format to use, valid options are hex and base64. If this Fingerprint uses the md5 algorithm, the default format is hex. Otherwise, the default is base64.

Fingerprint#matches(keyOrCertificate)

Verifies whether or not this Fingerprint matches a given Key or Certificate. This function uses double-hashing to avoid leaking timing information. Returns a boolean.

Note that a Key-type Fingerprint will always return false if asked to match a Certificate and vice versa.

Parameters

  • keyOrCertificate – a Key object or Certificate object, the entity to match this fingerprint against

Signatures

parseSignature(signature, algorithm, format)

Parses a signature in a given format, creating a Signature object. Useful for converting between the SSH and ASN.1 (PKCS/OpenSSL) signature formats, and also returned as output from PrivateKey#createSign().sign().

A Signature object can also be passed to a verifier produced by Key#createVerify() and it will automatically be converted internally into the correct format for verification.

Parameters

  • signature – a Buffer (binary) or String (base64), data of the actual signature in the given format
  • algorithm – a String, name of the algorithm to be used, possible values are rsa, dsa, ecdsa
  • format – a String, either asn1 or ssh

Signature.isSignature(obj)

Returns true if the given object is a valid Signature object created by a version of sshpk compatible with this one.

Parameters

  • obj – Object to identify

Signature#toBuffer([format = 'asn1'])

Converts a Signature to the given format and returns it as a Buffer.

Parameters

  • format – a String, either asn1 or ssh

Signature#toString([format = 'asn1'])

Same as this.toBuffer(format).toString('base64').

Certificates

sshpk includes basic support for parsing certificates in X.509 (PEM) format and the OpenSSH certificate format. This feature is intended to be used mainly to access basic metadata about certificates, extract public keys from them, and also to generate simple self-signed certificates from an existing key.

Notably, there is no implementation of CA chain-of-trust verification, and only very minimal support for key usage restrictions. Please do the security world a favour, and DO NOT use this code for certificate verification in the traditional X.509 CA chain style.

parseCertificate(data, format)

Parameters

  • data – a Buffer or String
  • format – a String, format to use, one of 'openssh', 'pem' (X.509 in a PEM wrapper), or 'x509' (raw DER encoded)

createSelfSignedCertificate(subject, privateKey[, options])

Parameters

  • subject – an Identity, the subject of the certificate
  • privateKey – a PrivateKey, the key of the subject: will be used both to be placed in the certificate and also to sign it (since this is a self-signed certificate)
  • options – optional Object, with keys:
    • lifetime – optional Number, lifetime of the certificate from now in seconds
    • validFrom, validUntil – optional Dates, beginning and end of certificate validity period. If given lifetime will be ignored
    • serial – optional Buffer, the serial number of the certificate
    • purposes – optional Array of String, X.509 key usage restrictions

createCertificate(subject, key, issuer, issuerKey[, options])

Parameters

  • subject – an Identity, the subject of the certificate
  • key – a Key, the public key of the subject
  • issuer – an Identity, the issuer of the certificate who will sign it
  • issuerKey – a PrivateKey, the issuer’s private key for signing
  • options – optional Object, with keys:
    • lifetime – optional Number, lifetime of the certificate from now in seconds
    • validFrom, validUntil – optional Dates, beginning and end of certificate validity period. If given lifetime will be ignored
    • serial – optional Buffer, the serial number of the certificate
    • purposes – optional Array of String, X.509 key usage restrictions

Certificate#subjects

Array of Identity instances describing the subject of this certificate.

Certificate#issuer

The Identity of the Certificate’s issuer (signer).

Certificate#subjectKey

The public key of the subject of the certificate, as a Key instance.

Certificate#issuerKey

The public key of the signing issuer of this certificate, as a Key instance. May be undefined if the issuer’s key is unknown (e.g. on an X509 certificate).

Certificate#serial

The serial number of the certificate. As this is normally a 64-bit or wider integer, it is returned as a Buffer.

Certificate#purposes

Array of Strings indicating the X.509 key usage purposes that this certificate is valid for. The possible strings at the moment are:

  • 'signature' – key can be used for digital signatures
  • 'identity' – key can be used to attest about the identity of the signer (X.509 calls this nonRepudiation)
  • 'codeSigning' – key can be used to sign executable code
  • 'keyEncryption' – key can be used to encrypt other keys
  • 'encryption' – key can be used to encrypt data (only applies for RSA)
  • 'keyAgreement' – key can be used for key exchange protocols such as Diffie-Hellman
  • 'ca' – key can be used to sign other certificates (is a Certificate Authority)
  • 'crl' – key can be used to sign Certificate Revocation Lists (CRLs)

Certificate#getExtension(nameOrOid)

Retrieves information about a certificate extension, if present, or returns undefined if not. The string argument nameOrOid should be either the OID (for X509 extensions) or the name (for OpenSSH extensions) of the extension to retrieve.

The object returned will have the following properties:

  • format – String, set to either 'x509' or 'openssh'
  • name or oid – String, only one set based on value of format
  • data – Buffer, the raw data inside the extension

Certificate#getExtensions()

Returns an Array of all present certificate extensions, in the same manner and format as getExtension().

Certificate#isExpired([when])

Tests whether the Certificate is currently expired (i.e. the validFrom and validUntil dates specify a range of time that does not include the current time).

Parameters

  • when – optional Date, if specified, tests whether the Certificate was or will be expired at the specified time instead of now

Returns a Boolean.

Certificate#isSignedByKey(key)

Tests whether the Certificate was validly signed by the given (public) Key.

Parameters

  • key – a Key instance

Returns a Boolean.

Certificate#isSignedBy(certificate)

Tests whether this Certificate was validly signed by the subject of the given certificate. Also tests that the issuer Identity of this Certificate and the subject Identity of the other Certificate are equivalent.

Parameters

  • certificate – another Certificate instance

Returns a Boolean.

Certificate#fingerprint([hashAlgo])

Returns the X509-style fingerprint of the entire certificate (as a Fingerprint instance). This matches what a web-browser or similar would display as the certificate fingerprint and should not be confused with the fingerprint of the subject’s public key.

Parameters

  • hashAlgo – an optional String, any hash function name

Certificate#toBuffer([format])

Serializes the Certificate to a Buffer and returns it.

Parameters

  • format – an optional String, output format, one of 'openssh', 'pem' or 'x509'. Defaults to 'x509'.

Returns a Buffer.

Certificate#toString([format])

  • format – an optional String, output format, one of 'openssh', 'pem' or 'x509'. Defaults to 'pem'.

Returns a String.

Certificate identities

identityForHost(hostname)

Constructs a host-type Identity for a given hostname.

Parameters

  • hostname – the fully qualified DNS name of the host

Returns an Identity instance.

identityForUser(uid)

Constructs a user-type Identity for a given UID.

Parameters

  • uid – a String, user identifier (login name)

Returns an Identity instance.

identityForEmail(email)

Constructs an email-type Identity for a given email address.

Parameters

  • email – a String, email address

Returns an Identity instance.

identityFromDN(dn)

Parses an LDAP-style DN string (e.g. 'CN=foo, C=US') and turns it into an Identity instance.

Parameters

  • dn – a String

Returns an Identity instance.

identityFromArray(arr)

Constructs an Identity from an array of DN components (see Identity#toArray() for the format).

Parameters

  • arr – an Array of Objects, DN components with name and value

Returns an Identity instance.

Attribute name OID
cn 2.5.4.3
o 2.5.4.10
ou 2.5.4.11
l 2.5.4.7
s 2.5.4.8
c 2.5.4.6
sn 2.5.4.4
postalCode 2.5.4.17
serialNumber 2.5.4.5
street 2.5.4.9
x500UniqueIdentifier 2.5.4.45
role 2.5.4.72
telephoneNumber 2.5.4.20
description 2.5.4.13
dc 0.9.2342.19200300.100.1.25
uid 0.9.2342.19200300.100.1.1
mail 0.9.2342.19200300.100.1.3
title 2.5.4.12
gn 2.5.4.42
initials 2.5.4.43
pseudonym 2.5.4.65

Identity#toString()

Returns the identity as an LDAP-style DN string. e.g. 'CN=foo, O=bar corp, C=us'

Identity#type

The type of identity. One of 'host', 'user', 'email' or 'unknown'

Identity#hostname

Identity#uid

Identity#email

Set when type is 'host', 'user', or 'email', respectively. Strings.

Identity#cn

The value of the first CN= in the DN, if any. It’s probably better to use the #get() method instead of this property.

Identity#get(name[, asArray])

Returns the value of a named attribute in the Identity DN. If there is no attribute of the given name, returns undefined. If multiple components of the DN contain an attribute of this name, an exception is thrown unless the asArray argument is given as true – then they will be returned as an Array in the same order they appear in the DN.

Parameters

  • name – a String
  • asArray – an optional Boolean

Identity#toArray()

Returns the Identity as an Array of DN component objects. Each object has a name and a value property. The returned objects may be safely modified.

Errors

InvalidAlgorithmError

The specified algorithm is not valid, either because it is not supported, or because it was not included on a list of allowed algorithms.

Thrown by Fingerprint.parse, Key#fingerprint.

Properties

  • algorithm – the algorithm that could not be validated

FingerprintFormatError

The fingerprint string given could not be parsed as a supported fingerprint format, or the specified fingerprint format is invalid.

Thrown by Fingerprint.parse, Fingerprint#toString.

Properties

  • fingerprint – if caused by a fingerprint, the string value given
  • format – if caused by an invalid format specification, the string value given

KeyParseError

The key data given could not be parsed as a valid key.

Properties

  • keyName – filename that was given to parseKey
  • format – the format that was trying to parse the key (see parseKey)
  • innerErr – the inner Error thrown by the format parser

KeyEncryptedError

The key is encrypted with a symmetric key (ie, it is password protected). The parsing operation would succeed if it was given the passphrase option.

Properties

  • keyName – filename that was given to parseKey
  • format – the format that was trying to parse the key (currently can only be "pem")

CertificateParseError

The certificate data given could not be parsed as a valid certificate.

Properties

  • certName – filename that was given to parseCertificate
  • format – the format that was trying to parse the certificate (see parseCertificate)
  • innerErr – the inner Error thrown by the format parser

Friends of sshpk

  • sshpk-agent is a library for speaking the ssh-agent protocol from node.js, which uses sshpk


Picomatch




Blazing fast and accurate glob matcher written in JavaScript.
No dependencies and full support for standard and extended Bash glob features, including braces, extglobs, POSIX brackets, and regular expressions.



Why picomatch?

  • Lightweight - No dependencies
  • Minimal - Tiny API surface. Main export is a function that takes a glob pattern and returns a matcher function.
  • Fast - Loads in about 2ms (that’s several times faster than a single frame of an HD movie at 60fps)
  • Performant - Use the returned matcher function to speed up repeat matching (like when watching files)
  • Accurate matching - Using wildcards (* and ?), globstars (**) for nested directories, advanced globbing with extglobs, braces, and POSIX brackets, and support for escaping special characters with \ or quotes.
  • Well tested - Thousands of unit tests

See the library comparison to other libraries.






Install

Install with npm:
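Presumably, following the convention of the other Node modules in this document:

```
npm install --save picomatch
```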


Usage

The main export is a function that takes a glob pattern and an options object and returns a function for matching strings.


API

picomatch

Creates a matcher function from one or more glob patterns. The returned function takes a string to match as its first argument, and returns true if the string is a match. The returned matcher function also takes a boolean as the second argument that, when true, returns an object with additional information.

Params

  • globs {String|Array}: One or more glob patterns.
  • options {Object=}
  • returns {Function=}: Returns a matcher function.

Example

.test

Test input with the given regex. This is used by the main picomatch() function to test the input string.

Params

  • input {String}: String to test.
  • regex {RegExp}
  • returns {Object}: Returns an object with matching info.

Example

.matchBase

Match the basename of a filepath.

Params

  • input {String}: String to test.
  • glob {RegExp|String}: Glob pattern or regex created by .makeRe.
  • returns {Boolean}

Example

.isMatch

Returns true if any of the given glob patterns match the specified string.

Params

  • str {String|Array}: The string to test.
  • patterns {String|Array}: One or more glob patterns to use for matching.
  • options {Object}: See available options.
  • returns {Boolean}: Returns true if any patterns match str

Example

.parse

Parse a glob pattern to create the source string for a regular expression.

Params

  • pattern {String}
  • options {Object}
  • returns {Object}: Returns an object with useful properties and output to be used as a regex source string.

Example

.scan

Scan a glob pattern to separate the pattern into segments.

Params

  • input {String}: Glob pattern to scan.
  • options {Object}
  • returns {Object}: Returns an object describing the scanned pattern, including the base path and glob segments.

Example

.compileRe

Create a regular expression from a parsed glob pattern.

Params

  • state {String}: The object returned from the .parse method.
  • options {Object}
  • returns {RegExp}: Returns a regex created from the given pattern.

Example

.toRegex

Create a regular expression from the given regex source string.

Params

  • source {String}: Regular expression source string.
  • options {Object}
  • returns {RegExp}

Example


Options

Picomatch options

The following options may be used with the main picomatch() function or any of the methods on the picomatch API.

Option Type Default value Description
basename boolean false If set, then patterns without slashes will be matched against the basename of the path if it contains slashes. For example, a?b would match the path /xyz/123/acb, but not /xyz/acb/123.
bash boolean false Follow bash matching rules more strictly - disallows backslashes as escape characters, and treats single stars as globstars (**).
capture boolean undefined Return regex matches in supporting methods.
contains boolean undefined Allows glob to match any part of the given string(s).
cwd string process.cwd() Current working directory. Used by picomatch.split()
debug boolean undefined Debug regular expressions when an error is thrown.
dot boolean false Enable dotfile matching. By default, dotfiles are ignored unless a . is explicitly defined in the pattern, or options.dot is true
expandRange function undefined Custom function for expanding ranges in brace patterns, such as {a..z}. The function receives the range values as two arguments, and it must return a string to be used in the generated regex. It’s recommended that returned strings be wrapped in parentheses.
failglob boolean false Throws an error if no matches are found. Based on the bash option of the same name.
fastpaths boolean true To speed up processing, full parsing is skipped for a handful of common glob patterns. Disable this behavior by setting this option to false.
flags string undefined Regex flags to use in the generated regex. If defined, the nocase option will be overridden.
format function undefined Custom function for formatting the returned string. This is useful for removing leading slashes, converting Windows paths to Posix paths, etc.
ignore array|string undefined One or more glob patterns for excluding strings that should not be matched from the result.
keepQuotes boolean false Retain quotes in the generated regex, since quotes may also be used as an alternative to backslashes.
literalBrackets boolean undefined When true, brackets in the glob pattern will be escaped so that only literal brackets will be matched.
lookbehinds boolean true Support regex positive and negative lookbehinds. Note that you must be using Node 8.1.10 or higher to enable regex lookbehinds.
matchBase boolean false Alias for basename
maxLength number 65536 Limit the max length of the input string. An error is thrown if the input string is longer than this value.
nobrace boolean false Disable brace matching, so that {a,b} and {1..3} would be treated as literal characters.
nobracket boolean undefined Disable matching with regex brackets.
nocase boolean false Make matching case-insensitive. Equivalent to the regex i flag. Note that this option is overridden by the flags option.
nodupes boolean true Deprecated, use nounique instead. This option will be removed in a future major release. By default duplicates are removed. Disable uniquification by setting this option to false.
noext boolean false Alias for noextglob
noextglob boolean false Disable support for matching with extglobs (like +(a|b))
noglobstar boolean false Disable support for matching nested directories with globstars (**)
nonegate boolean false Disable support for negating with leading !
noquantifiers boolean false Disable support for regex quantifiers (like a{1,2}) and treat them as brace patterns to be expanded.
onIgnore function undefined Function to be called on ignored items.
onMatch function undefined Function to be called on matched items.
onResult function undefined Function to be called on all items, regardless of whether or not they are matched or ignored.
posix boolean false Support POSIX character classes (“posix brackets”).
posixSlashes boolean undefined Convert all slashes in file paths to forward slashes. This does not convert slashes in the glob pattern itself
prepend boolean undefined String to prepend to the generated regex used for matching.
regex boolean false Use regular expression rules for + (instead of matching literal +), and for stars that follow closing parentheses or brackets (as in )* and ]*).
strictBrackets boolean undefined Throw an error if brackets, braces, or parens are imbalanced.
strictSlashes boolean undefined When true, picomatch won’t match trailing slashes with single stars.
unescape boolean undefined Remove backslashes preceding escaped characters in the glob pattern. By default, backslashes are retained.
unixify boolean undefined Alias for posixSlashes, for backwards compatibility.

Scan Options

In addition to the main picomatch options, the following options may also be used with the .scan method.

Option Type Default value Description
tokens boolean false When true, the returned object will include an array of tokens (objects), representing each path “segment” in the scanned glob pattern
parts boolean false When true, the returned object will include an array of strings representing each path “segment” in the scanned glob pattern. This is automatically enabled when options.tokens is true

Example


Options Examples

options.expandRange

Type: function

Default: undefined

Custom function for expanding ranges in brace patterns. The fill-range library is ideal for this purpose, or you can use custom code to do whatever you need.

Example

The following example shows how to create a glob that matches a range of folder names.

options.format

Type: function

Default: undefined

Custom function for formatting strings before they’re matched.

Example

options.onMatch

options.onIgnore

options.onResult



Globbing features

Basic globbing

Character Description
* Matches any character zero or more times, excluding path separators. Does not match path separators or hidden files or directories (“dotfiles”), unless explicitly enabled by setting the dot option to true.
** Matches any character zero or more times, including path separators. Note that ** will only match path separators (/, and \\ on Windows) when they are the only characters in a path segment. Thus, foo**/bar is equivalent to foo*/bar, and foo/a**b/bar is equivalent to foo/a*b/bar, and more than two consecutive stars in a glob path segment are regarded as a single star. Thus, foo/***/bar is equivalent to foo/*/bar.
? Matches any character excluding path separators one time. Does not match path separators or leading dots.
[abc] Matches any characters inside the brackets. For example, [abc] would match the characters a, b or c, and nothing else.

Matching behavior vs. Bash

Picomatch’s matching features and expected results in unit tests are based on Bash’s unit tests and the Bash 4.3 specification, with the following exceptions:

  • Bash will match foo/bar/baz with *. Picomatch only matches nested directories with **.
  • Bash greedily matches with negated extglobs. For example, Bash 4.3 says that !(foo)* should match foo and foobar, since the trailing * backtracks to match the preceding pattern. This is very memory-inefficient, and IMHO, also incorrect. Picomatch would return false for both foo and foobar.


Advanced globbing

Extglobs

Pattern Description
@(pattern) Match only one consecutive occurrence of pattern
*(pattern) Match zero or more consecutive occurrences of pattern
+(pattern) Match one or more consecutive occurrences of pattern
?(pattern) Match zero or one consecutive occurrences of pattern
!(pattern) Match anything but pattern

Examples

POSIX brackets

POSIX classes are disabled by default. Enable this feature by setting the posix option to true.

Enable POSIX bracket support

The following named POSIX bracket expressions are supported:

  • [:alnum:] - Alphanumeric characters, equivalent to [a-zA-Z0-9].
  • [:alpha:] - Alphabetical characters, equivalent to [a-zA-Z].
  • [:ascii:] - ASCII characters, equivalent to [\\x00-\\x7F].
  • [:blank:] - Space and tab characters, equivalent to [ \\t].
  • [:cntrl:] - Control characters, equivalent to [\\x00-\\x1F\\x7F].
  • [:digit:] - Numerical digits, equivalent to [0-9].
  • [:graph:] - Graph characters, equivalent to [\\x21-\\x7E].
  • [:lower:] - Lowercase letters, equivalent to [a-z].
  • [:print:] - Print characters, equivalent to [\\x20-\\x7E ].
  • [:punct:] - Punctuation and symbols, equivalent to [\\-!"#$%&\'()\\*+,./:;<=>?@[\\]^_`{|}~].
  • [:space:] - Extended space characters, equivalent to [ \\t\\r\\n\\v\\f].
  • [:upper:] - Uppercase letters, equivalent to [A-Z].
  • [:word:] - Word characters (letters, numbers and underscores), equivalent to [A-Za-z0-9_].
  • [:xdigit:] - Hexadecimal digits, equivalent to [A-Fa-f0-9].

Braces

Matching special characters as literals

If you wish to match the following special characters in a filepath, and you want to use these characters in your glob pattern, they must be escaped with backslashes or quotes:

Special Characters

Some characters that are used for matching in regular expressions are also regarded as valid file path characters on some platforms.

To match any of the following characters as literals: $ ^ * + ? ( )

Examples:



Library Comparisons

The following table shows which features are supported by minimatch, micromatch, picomatch, nanomatch, extglob, braces, and expand-brackets.

Feature minimatch micromatch picomatch nanomatch extglob braces expand-brackets
Wildcard matching (*?+) - - -
Advanced globbing - - - -
Brace matching - - -
Brace expansion - - - -
Extglobs partial - - -
Posix brackets - - - -
Regular expression syntax - -
File system operations - - - - - - -



Benchmarks

Performance comparison of picomatch and minimatch.

# .makeRe star
  picomatch x 1,993,050 ops/sec ±0.51% (91 runs sampled)
  minimatch x 627,206 ops/sec ±1.96% (87 runs sampled)

# .makeRe star; dot=true
  picomatch x 1,436,640 ops/sec ±0.62% (91 runs sampled)
  minimatch x 525,876 ops/sec ±0.60% (88 runs sampled)

# .makeRe globstar
  picomatch x 1,592,742 ops/sec ±0.42% (90 runs sampled)
  minimatch x 962,043 ops/sec ±1.76% (91 runs sampled)

# .makeRe globstars
  picomatch x 1,615,199 ops/sec ±0.35% (94 runs sampled)
  minimatch x 477,179 ops/sec ±1.33% (91 runs sampled)

# .makeRe with leading star
  picomatch x 1,220,856 ops/sec ±0.40% (92 runs sampled)
  minimatch x 453,564 ops/sec ±1.43% (94 runs sampled)

# .makeRe - basic braces
  picomatch x 392,067 ops/sec ±0.70% (90 runs sampled)
  minimatch x 99,532 ops/sec ±2.03% (87 runs sampled)



Philosophies

The goal of this library is to be blazing fast, without compromising on accuracy.

Accuracy

The number one goal of this library is accuracy. However, it’s not unusual for different glob implementations to have different rules for matching behavior, even with simple wildcard matching. It gets increasingly complicated when different features are combined, like when extglobs are combined with globstars, braces, slashes, and so on: !(**/{a,b,*/c}).

Thus, given that there is no canonical glob specification to use as a single source of truth when differences of opinion arise regarding behavior, we sometimes have to use our best judgement and rely on feedback from users to make improvements.

Performance

Although this library performs well in benchmarks, and in most cases it’s faster than other popular libraries we benchmarked against, we will always choose accuracy over performance. It’s not helpful to anyone if our library is faster at returning the wrong answer.



About

Contributing

Pull requests and stars are always welcome. For bugs and feature requests, please create an issue.

Please read the contributing guide for advice on opening issues, pull requests, and coding standards.

Running Tests

Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command:

Building docs

(This project’s readme.md is generated by verb, please don’t edit the readme directly. Any changes to the readme must be made in the .verb.md readme template.)

To generate the readme, run the following command:

Author

Jon Schlinkert

RFC6265 Cookies and CookieJar for Node.js




Synopsis



Installation

It’s so easy!

npm install tough-cookie

Why the name? NPM modules cookie, cookies and cookiejar were already taken.



API

tough

Functions on the module you get from require('tough-cookie'). All can be used as pure functions and don’t need to be “bound”.

Note: prior to 1.0.x, several of these functions took a strict parameter. This has since been removed from the API as it was no longer necessary.

parseDate(string)

Parse a cookie date string into a Date. Parses according to RFC6265 Section 5.1.1, not Date.parse().

formatDate(date)

Format a Date into a RFC1123 string (the RFC6265-recommended format).

canonicalDomain(str)

Transforms a domain-name into a canonical domain-name. The canonical domain-name is a trimmed, lowercased, stripped-of-leading-dot and optionally punycode-encoded domain-name (Section 5.1.2 of RFC6265). For the most part, this function is idempotent (can be run again on its output without ill effects).

domainMatch(str,domStr[,canonicalize=true])

Answers “does this real domain match the domain in a cookie?”. The str is the “current” domain-name and the domStr is the “cookie” domain-name. Matches according to RFC6265 Section 5.1.3, but it helps to think of it as a “suffix match”.

The canonicalize parameter will run the other two parameters through canonicalDomain or not.

defaultPath(path)

Given a current request/response path, gives the Path appropriate for storing in a cookie. This is basically the “directory” of a “file” in the path, but is specified by Section 5.1.4 of the RFC.

The path parameter MUST be only the pathname part of a URI (i.e. excludes the hostname, query, fragment, etc.). This is the .pathname property of node’s url.parse() output.

pathMatch(reqPath,cookiePath)

Answers “does the request-path path-match a given cookie-path?” as per RFC6265 Section 5.1.4. Returns a boolean.

This is essentially a prefix-match where cookiePath is a prefix of reqPath.

parse(cookieString[, options])

alias for Cookie.parse(cookieString[, options])

fromJSON(string)

alias for Cookie.fromJSON(string)

getPublicSuffix(hostname)

Returns the public suffix of this hostname. The public suffix is the shortest domain-name upon which a cookie can be set. Returns null if the hostname cannot have cookies set for it.

For example: www.example.com and www.subdomain.example.com both have public suffix example.com.

For further information, see http://publicsuffix.org/. This module derives its list from that site. This call is currently a wrapper around psl’s get() method.

cookieCompare(a,b)

For use with .sort(), sorts a list of cookies into the recommended order given in the RFC (Section 5.4 step 2). The sort algorithm is, in order of precedence:

  • Longest .path
  • oldest .creation (which has a 1ms precision, same as Date)
  • lowest .creationIndex (to get beyond the 1ms precision)

Note: Since JavaScript’s Date is limited to a 1ms precision, cookies within the same millisecond are entirely possible. This is especially true when using the now option to .setCookie(). The .creationIndex property is a per-process global counter, assigned during construction with new Cookie(). This preserves the spirit of the RFC sorting: older cookies go first. This works great for MemoryCookieStore, since Set-Cookie headers are parsed in order, but may not be so great for distributed systems. Sophisticated Stores may wish to set this to some other logical clock such that if cookies A and B are created in the same millisecond, but cookie A is created before cookie B, then A.creationIndex < B.creationIndex. If you want to alter the global counter, which you probably shouldn’t do, it’s stored in Cookie.cookiesCreated.

permuteDomain(domain)

Generates a list of all possible domains that domainMatch() the parameter. May be handy for implementing cookie stores.

permutePath(path)

Generates a list of all possible paths that pathMatch() the parameter. May be handy for implementing cookie stores.

Cookie

Exported via tough.Cookie.

Cookie.parse(cookieString[, options])

Parses a single Cookie or Set-Cookie HTTP header into a Cookie object. Returns undefined if the string can’t be parsed.

The options parameter is not required and currently has only one property:

  • loose - boolean - if true enable parsing of key-less cookies like =abc and =, which are not RFC-compliant.

If options is not an object, it is ignored, which means you can use Array#map with it.

Here’s how to process the Set-Cookie header(s) on a node HTTP/HTTPS response:

Note: in version 2.3.3, tough-cookie limited the number of spaces before the = to 256 characters. This limitation has since been removed. See Issue 92

Properties

Cookie object properties:

  • key - string - the name or key of the cookie (default "")
  • value - string - the value of the cookie (default "")
  • expires - Date - if set, the Expires= attribute of the cookie (defaults to the string "Infinity"). See setExpires()
  • maxAge - seconds - if set, the Max-Age= attribute in seconds of the cookie. May also be set to strings "Infinity" and "-Infinity" for non-expiry and immediate-expiry, respectively. See setMaxAge()
  • domain - string - the Domain= attribute of the cookie
  • path - string - the Path= of the cookie
  • secure - boolean - the Secure cookie flag
  • httpOnly - boolean - the HttpOnly cookie flag
  • extensions - Array - any unrecognized cookie attributes as strings (even if they contain equal signs)
  • creation - Date - when this cookie was constructed
  • creationIndex - number - set at construction, used to provide greater sort precision (please see cookieCompare(a,b) for a full explanation)

After a cookie has been passed through CookieJar.setCookie() it will have the following additional attributes:

  • hostOnly - boolean - is this a host-only cookie (i.e. no Domain field was set, but was instead implied)
  • pathIsDefault - boolean - if true, there was no Path field on the cookie and defaultPath() was used to derive one.
  • creation - Date - modified from construction to when the cookie was added to the jar
  • lastAccessed - Date - last time the cookie got accessed. Will affect cookie cleaning once implemented. Using cookiejar.getCookies(...) will update this attribute.

Cookie([{properties}])

Receives an options object that can contain any of the above Cookie properties, uses the default for unspecified properties.

.toString()

encode to a Set-Cookie header value. The Expires cookie field is set using formatDate(), but is omitted entirely if .expires is Infinity.

.cookieString()

encode to a Cookie header value (i.e. the .key and .value properties joined with ‘=’).

.setExpires(String)

sets the expiry based on a date-string passed through parseDate(). If parseDate returns null (i.e. the date string can’t be parsed), .expires is set to "Infinity" (a string).

.setMaxAge(number)

sets the maxAge in seconds. Coerces -Infinity to "-Infinity" and Infinity to "Infinity" so it JSON serializes correctly.

.expiryTime([now=Date.now()])

.expiryDate([now=Date.now()])

expiryTime() Computes the absolute unix-epoch milliseconds that this cookie expires. expiryDate() works similarly, except it returns a Date object. Note that in both cases the now parameter should be milliseconds.

Max-Age takes precedence over Expires (as per the RFC). The .creation attribute – or, by default, the now parameter – is used to offset the .maxAge attribute.

If Expires (.expires) is set, that’s returned.

Otherwise, expiryTime() returns Infinity and expiryDate() returns a Date object for “Tue, 19 Jan 2038 03:14:07 GMT” (latest date that can be expressed by a 32-bit time_t; the common limit for most user-agents).

.TTL([now=Date.now()])

compute the TTL relative to now (milliseconds). The same precedence rules as for expiryTime/expiryDate apply.

The “number” Infinity is returned for cookies without an explicit expiry and 0 is returned if the cookie is expired. Otherwise a time-to-live in milliseconds is returned.

.canonicalizedDomain()

.cdomain()

return the canonicalized .domain field. This is lower-cased and punycode (RFC3490) encoded if the domain has any non-ASCII characters.

.toJSON()

For convenience in using JSON.stringify(cookie). Returns a plain-old Object that can be JSON-serialized.

Any Date properties (i.e., .expires, .creation, and .lastAccessed) are exported in ISO format (.toISOString()).

NOTE: Custom Cookie properties will be discarded. In tough-cookie 1.x, since there was no .toJSON method explicitly defined, all enumerable properties were captured. If you want a property to be serialized, add the property name to the Cookie.serializableProperties Array.

Cookie.fromJSON(strOrObj)

Does the reverse of cookie.toJSON(). If passed a string, will JSON.parse() that first.

Any Date properties (i.e., .expires, .creation, and .lastAccessed) are parsed via Date.parse(), not the tough-cookie parseDate, since it’s JavaScript/JSON-y timestamps being handled at this layer.

Returns null upon JSON parsing error.

.clone()

Does a deep clone of this cookie, exactly implemented as Cookie.fromJSON(cookie.toJSON()).

.validate()

Status: IN PROGRESS. Works for a few things, but is by no means comprehensive.

validates cookie attributes for semantic correctness. Useful for “lint” checking any Set-Cookie headers you generate. For now, it returns a boolean, but eventually could return a reason string – you can future-proof with this construct:

CookieJar

Exported via tough.CookieJar.

CookieJar([store],[options])

Simply use new CookieJar(). If you’d like to use a custom store, pass that to the constructor otherwise a MemoryCookieStore will be created and used.

The options object can be omitted and can have the following properties:

  • rejectPublicSuffixes - boolean - default true - reject cookies with domains like “com” and “co.uk”
  • looseMode - boolean - default false - accept malformed cookies like bar and =bar, which have an implied empty name. This is not in the standard, but is used sometimes on the web and is accepted by (most) browsers.

Since eventually this module would like to support database/remote/etc. CookieJars, continuation passing style is used for CookieJar methods.

.setCookie(cookieOrString, currentUrl, [{options},] cb(err,cookie))

Attempt to set the cookie in the cookie jar. If the operation fails, an error will be given to the callback cb, otherwise the cookie is passed through. The cookie will have updated .creation, .lastAccessed and .hostOnly properties.

The options object can be omitted and can have the following properties:

  • http - boolean - default true - indicates if this is an HTTP or non-HTTP API. Affects HttpOnly cookies.
  • secure - boolean - autodetect from url - indicates if this is a “Secure” API. If the currentUrl starts with https: or wss: then this is defaulted to true, otherwise false.
  • now - Date - default new Date() - what to use for the creation/access time of cookies
  • ignoreError - boolean - default false - silently ignore things like parse errors and invalid domains. Store errors aren’t ignored by this option.

As per the RFC, the .hostOnly property is set if there was no “Domain=” parameter in the cookie string (or .domain was null on the Cookie object). The .domain property is set to the fully-qualified hostname of currentUrl in this case. Matching this cookie requires an exact hostname match (not a domainMatch as per usual).

.setCookieSync(cookieOrString, currentUrl, [{options}])

Synchronous version of setCookie; only works with synchronous stores (e.g. the default MemoryCookieStore).

.getCookies(currentUrl, [{options},] cb(err,cookies))

Retrieve the list of cookies that can be sent in a Cookie header for the current url.

If an error is encountered, that’s passed as err to the callback, otherwise an Array of Cookie objects is passed. The array is sorted with cookieCompare() unless the {sort:false} option is given.

The options object can be omitted and can have the following properties:

  • http - boolean - default true - indicates if this is an HTTP or non-HTTP API. Affects HttpOnly cookies.
  • secure - boolean - autodetect from url - indicates if this is a “Secure” API. If the currentUrl starts with https: or wss: then this is defaulted to true, otherwise false.
  • now - Date - default new Date() - what to use for the creation/access time of cookies
  • expire - boolean - default true - perform expiry-time checking of cookies and asynchronously remove expired cookies from the store. Using false will return expired cookies and not remove them from the store (which is useful for replaying Set-Cookie headers, potentially).
  • allPaths - boolean - default false - if true, do not scope cookies by path. The default uses RFC-compliant path scoping. Note: may not be supported by the underlying store (the default MemoryCookieStore supports it).

The .lastAccessed property of the returned cookies will have been updated.

.getCookiesSync(currentUrl, [{options}])

Synchronous version of getCookies; only works with synchronous stores (e.g. the default MemoryCookieStore).

.getCookieString(...)

Accepts the same options as .getCookies() but passes a string suitable for a Cookie header rather than an array to the callback. Simply maps the Cookie array via .cookieString().

.getCookieStringSync(...)

Synchronous version of getCookieString; only works with synchronous stores (e.g. the default MemoryCookieStore).

.getSetCookieStrings(...)

Returns an array of strings suitable for Set-Cookie headers. Accepts the same options as .getCookies(). Simply maps the cookie array via .toString().

.getSetCookieStringsSync(...)

Synchronous version of getSetCookieStrings; only works with synchronous stores (e.g. the default MemoryCookieStore).

.serialize(cb(err,serializedObject))

Serialize the Jar if the underlying store supports .getAllCookies.

NOTE: Custom Cookie properties will be discarded. If you want a property to be serialized, add the property name to the Cookie.serializableProperties Array.

See Serialization Format.

.serializeSync()

Sync version of .serialize

.toJSON()

Alias of .serializeSync() for the convenience of JSON.stringify(cookiejar).

CookieJar.deserialize(serialized, [store], cb(err,object))

A new Jar is created and the serialized Cookies are added to the underlying store. Each Cookie is added via store.putCookie in the order in which they appear in the serialization.

The store argument is optional, but should be an instance of Store. By default, a new instance of MemoryCookieStore is created.

As a convenience, if serialized is a string, it is passed through JSON.parse first. If that throws an error, this is passed to the callback.

CookieJar.deserializeSync(serialized, [store])

Sync version of .deserialize. Note that the store must be synchronous for this to work.

CookieJar.fromJSON(string)

Alias of .deserializeSync to provide consistency with Cookie.fromJSON().

.clone([store,]cb(err,newJar))

Produces a deep clone of this jar. Modifications to the original won’t affect the clone, and vice versa.

The store argument is optional, but should be an instance of Store. By default, a new instance of MemoryCookieStore is created. Transferring between store types is supported so long as the source implements .getAllCookies() and the destination implements .putCookie().

.cloneSync([store])

Synchronous version of .clone, returning a new CookieJar instance.

The store argument is optional, but must be a synchronous Store instance if specified. If not passed, a new instance of MemoryCookieStore is used.

The source and destination must both be synchronous Stores. If one or both stores are asynchronous, use .clone instead. Recall that MemoryCookieStore supports both synchronous and asynchronous API calls.

.removeAllCookies(cb(err))

Removes all cookies from the jar.

This is a new backwards-compatible feature of tough-cookie version 2.5, so not all Stores will implement it efficiently. For Stores that do not implement removeAllCookies, the fallback is to call removeCookie after getAllCookies. If getAllCookies fails or isn’t implemented in the Store, that error is returned. If one or more of the removeCookie calls fail, only the first error is returned.

.removeAllCookiesSync()

Sync version of .removeAllCookies()

Store

Base class for CookieJar stores. Available as tough.Store.

Store API

The storage model for each CookieJar instance can be replaced with a custom implementation. The default is MemoryCookieStore which can be found in the lib/memstore.js file. The API uses continuation-passing-style to allow for asynchronous stores.

Stores should inherit from the base Store class, which is available as require('tough-cookie').Store.

Stores are asynchronous by default, but if store.synchronous is set to true, then the *Sync methods of the containing CookieJar can be used (the continuation-passing-style API must still be implemented, however).

All domain parameters will have been normalized before calling.

The Cookie store must have all of the following methods.

store.findCookie(domain, path, key, cb(err,cookie))

Retrieve a cookie with the given domain, path and key (a.k.a. name). The RFC maintains that exactly one of these cookies should exist in a store. If the store is using versioning, this means that the latest/newest such cookie should be returned.

Callback takes an error and the resulting Cookie object. If no cookie is found then null MUST be passed instead (i.e. not an error).

store.findCookies(domain, path, cb(err,cookies))

Locates cookies matching the given domain and path. This is most often called in the context of cookiejar.getCookies() above.

If no cookies are found, the callback MUST be passed an empty array.

The resulting list will be checked for applicability to the current request according to the RFC (domain-match, path-match, http-only-flag, secure-flag, expiry, etc.), so it’s OK to use an optimistic search algorithm when implementing this method. However, the search algorithm used SHOULD try to find cookies that domainMatch() the domain and pathMatch() the path in order to limit the amount of checking that needs to be done.

As of version 0.9.12, the allPaths option to cookiejar.getCookies() above will cause the path here to be null. If the path is null, path-matching MUST NOT be performed (i.e. domain-matching only).

store.putCookie(cookie, cb(err))

Adds a new cookie to the store. The implementation SHOULD replace any existing cookie with the same .domain, .path, and .key properties – depending on the nature of the implementation, it’s possible that between the call to findCookie and putCookie a duplicate putCookie can occur.

The cookie object MUST NOT be modified; the caller will have already updated the .creation and .lastAccessed properties.

Pass an error if the cookie cannot be stored.

store.updateCookie(oldCookie, newCookie, cb(err))

Update an existing cookie. The implementation MUST update the .value for a cookie with the same domain, .path and .key. The implementation SHOULD check that the old value in the store is equivalent to oldCookie - how the conflict is resolved is up to the store.

The .lastAccessed property will always be different between the two objects (to the precision possible via JavaScript’s clock). Both .creation and .creationIndex are guaranteed to be the same. Stores MAY ignore or defer the .lastAccessed change at the cost of affecting how cookies are selected for automatic deletion (e.g., least-recently-used, which is up to the store to implement).

Stores may wish to optimize changing the .value of the cookie in the store versus storing a new cookie. If the implementation doesn’t define this method a stub that calls putCookie(newCookie,cb) will be added to the store object.

The newCookie and oldCookie objects MUST NOT be modified.

Pass an error if the newCookie cannot be stored.

store.removeCookie(domain, path, key, cb(err))

Remove a cookie from the store (see notes on findCookie about the uniqueness constraint).

The implementation MUST NOT pass an error if the cookie doesn’t exist; only pass an error due to the failure to remove an existing cookie.

store.removeCookies(domain, path, cb(err))

Removes matching cookies from the store. The path parameter is optional, and if missing means all paths in a domain should be removed.

Pass an error ONLY if removing any existing cookies failed.

store.removeAllCookies(cb(err))

Optional. Removes all cookies from the store.

Pass an error if one or more cookies can’t be removed.

Note: New method as of tough-cookie version 2.5, so not all Stores will implement it, and some may choose not to.

store.getAllCookies(cb(err, cookies))

Optional. Produces an Array of all cookies during jar.serialize(). The items in the array can be true Cookie objects or generic Objects with the Serialization Format data structure.

Cookies SHOULD be returned in creation order to preserve sorting via cookieCompare(). For reference, MemoryCookieStore will sort by .creationIndex since it uses true Cookie objects internally. If you don’t return the cookies in creation order, they’ll still be sorted by creation time, but this only has a precision of 1ms. See cookieCompare for more detail.

Pass an error if retrieval fails.

Note: not all Stores can implement this due to technical limitations, so it is optional.

MemoryCookieStore

Inherits from Store.

Community Cookie Stores

These are some Store implementations authored and maintained by the community. They aren’t official and we don’t vouch for them, but you may be interested to have a look:



Serialization Format

NOTE: if you want to have custom Cookie properties serialized, add the property name to Cookie.serializableProperties.



BSD-3-Clause:

All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. 3. Neither the name of Salesforce.com nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

nanomatch NPM version NPM monthly downloads NPM total downloads Linux Build Status Windows Build Status

Fast, minimal glob matcher for node.js. Similar to micromatch, minimatch and multimatch, but with complete Bash 4.3 wildcard support only (no support for extglobs, posix brackets or braces).

Please consider following this project’s author, Jon Schlinkert, and consider starring the project to show your :heart: and support.

Table of Contents

Details

Install

Install with npm:

Release history

History

key

Changelog entries are classified using the following labels (from keep-a-changelog):

  • added: for new features
  • changed: for changes in existing functionality
  • deprecated: for once-stable features removed in upcoming releases
  • removed: for deprecated features removed in this release
  • fixed: for any bug fixes
  • bumped: updated dependencies, only minor or higher will be listed.

1.1.0 - 2017-04-11

Fixed

  • adds support for unclosed quotes

Added

  • adds support for options.noglobstar

1.0.4 - 2017-04-06

Housekeeping updates. Adds documentation section about escaping, cleans up utils.

1.0.3 - 2017-04-06

This release includes fixes for windows path edge cases and other improvements for stricter adherence to bash spec.

Fixed

  • More windows path edge cases

Added

1.0.1 - 2016-12-12

Added

1.0.0 - 2016-12-12

Stable release.

[0.1.0] - 2016-10-08

First release.

What is nanomatch?

Nanomatch is a fast and accurate glob matcher with full support for standard Bash glob features, including the following “metacharacters”: *, **, ? and [...].

Learn more

  • Getting started: learn how to install and begin using nanomatch
  • Features: jump to info about supported patterns, and a glob matching reference
  • API documentation: jump to available options and methods
  • Unit tests: visit the unit tests. There is no better way to learn a code library than spending time in its unit tests. Nanomatch has 36,000 unit tests - go become a glob matching ninja!

How is this different?

Speed and accuracy

Nanomatch uses snapdragon for parsing and compiling globs, which results in:

  • Granular control over the entire conversion process in a way that is easy to understand, reason about, and customize.
  • Faster matching, from a combination of optimized glob patterns and (optional) caching.
  • Much greater accuracy than minimatch. In fact, nanomatch passes all of the spec tests from bash, including some that bash still fails. However, since there is no real specification for globs, if you encounter a pattern that yields unexpected match results after researching previous issues, please let us know.

Basic globbing only

Nanomatch supports basic globbing only, which is limited to *, **, ? and regex-like brackets.

If you need support for the other bash “expansion” types (in addition to the wildcard matching provided by nanomatch), consider using micromatch instead. (micromatch >=3.0.0 uses the nanomatch parser and compiler for basic glob matching)

Getting started

Installing nanomatch

Install with yarn

Install with npm

Usage

Add nanomatch to your project using node’s require() system:

Params

  • list {String|Array}: List of strings to perform matches against. This is often a list of file paths.
  • patterns {String|Array}: One or more glob patterns to use for matching.
  • options {Object}: Any supported options may be passed

Examples

See the API documentation for available methods and options.

Documentation

Escaping

Backslashes and quotes can be used to escape characters, forcing nanomatch to regard those characters as literal characters.

Backslashes

Use backslashes to escape single characters. For example, the following pattern would match foo/*/bar exactly:

The following pattern would match foo/ followed by a literal *, followed by zero or more of any characters besides /, followed by /bar.

Quoted strings

Use single or double quotes to escape sequences of characters. For example, the following patterns would match foo/**/bar exactly:

Matching literal quotes

If you need to match quotes literally, you can escape them as well. For example, the following will match foo/"*"/bar, foo/"a"/bar, foo/"b"/bar, or foo/"c"/bar:

And the following will match foo/'*'/bar, foo/'a'/bar, foo/'b'/bar, or foo/'c'/bar:

API

nanomatch

The main function takes a list of strings and one or more glob patterns to use for matching.

Params

  • list {Array}: A list of strings to match
  • patterns {String|Array}: One or more glob patterns to use for matching.
  • options {Object}: See available options for changing how matches are performed
  • returns {Array}: Returns an array of matches

Example

.match

Similar to the main function, but pattern must be a string.

Params

  • list {Array}: Array of strings to match
  • pattern {String}: Glob pattern to use for matching.
  • options {Object}: See available options for changing how matches are performed
  • returns {Array}: Returns an array of matches

Example

.isMatch

Returns true if the specified string matches the given glob pattern.

Params

  • string {String}: String to match
  • pattern {String}: Glob pattern to use for matching.
  • options {Object}: See available options for changing how matches are performed
  • returns {Boolean}: Returns true if the string matches the glob pattern.

Example

.some

Returns true if some of the elements in the given list match any of the given glob patterns.

Params

  • list {String|Array}: The string or array of strings to test. Returns as soon as the first match is found.
  • patterns {String|Array}: One or more glob patterns to use for matching.
  • options {Object}: See available options for changing how matches are performed
  • returns {Boolean}: Returns true if any patterns match str

Example

.every

Returns true if every element in the given list matches at least one of the given glob patterns.

Params

  • list {String|Array}: The string or array of strings to test.
  • patterns {String|Array}: One or more glob patterns to use for matching.
  • options {Object}: See available options for changing how matches are performed
  • returns {Boolean}: Returns true if any patterns match str

Example

.any

Returns true if any of the given glob patterns match the specified string.

Params

  • str {String|Array}: The string to test.
  • patterns {String|Array}: One or more glob patterns to use for matching.
  • options {Object}: See available options for changing how matches are performed
  • returns {Boolean}: Returns true if any patterns match str

Example

.all

Returns true if all of the given patterns match the specified string.

Params

  • str {String|Array}: The string to test.
  • patterns {String|Array}: One or more glob patterns to use for matching.
  • options {Object}: See available options for changing how matches are performed
  • returns {Boolean}: Returns true if any patterns match str

Example

.not

Returns a list of strings that do not match any of the given patterns.

Params

  • list {Array}: Array of strings to match.
  • patterns {String|Array}: One or more glob pattern to use for matching.
  • options {Object}: See available options for changing how matches are performed
  • returns {Array}: Returns an array of strings that do not match the given patterns.

Example

.contains

Returns true if the given string contains the given pattern. Similar to .isMatch but the pattern can match any part of the string.

Params

  • str {String}: The string to match.
  • patterns {String|Array}: Glob pattern to use for matching.
  • options {Object}: See available options for changing how matches are performed
  • returns {Boolean}: Returns true if the pattern matches any part of str.

Example

.matchKeys

Filter the keys of the given object with the given glob pattern and options. Does not attempt to match nested keys. If you need this feature, use glob-object instead.

Params

  • object {Object}: The object with keys to filter.
  • patterns {String|Array}: One or more glob patterns to use for matching.
  • options {Object}: See available options for changing how matches are performed
  • returns {Object}: Returns an object with only keys that match the given patterns.

Example

.matcher

Returns a memoized matcher function from the given glob pattern and options. The returned function takes a string to match as its only argument and returns true if the string is a match.

Params

  • pattern {String}: Glob pattern
  • options {Object}: See available options for changing how matches are performed.
  • returns {Function}: Returns a matcher function.

Example

.capture

Returns an array of matches captured by the pattern in the string, or null if the pattern did not match.

Params

  • pattern {String}: Glob pattern to use for matching.
  • string {String}: String to match
  • options {Object}: See available options for changing how matches are performed
  • returns {Boolean}: Returns an array of captures if the string matches the glob pattern, otherwise null.

Example

.makeRe

Create a regular expression from the given glob pattern.

Params

  • pattern {String}: A glob pattern to convert to regex.
  • options {Object}: See available options for changing how matches are performed.
  • returns {RegExp}: Returns a regex created from the given pattern.

Example

.create

Parses the given glob pattern and returns an object with the compiled output and optional source map.

Params

  • pattern {String}: Glob pattern to parse and compile.
  • options {Object}: Any options to change how parsing and compiling is performed.
  • returns {Object}: Returns an object with the parsed AST, compiled string and optional source map.

Example

.parse

Parse the given str with the given options.

Params

  • str {String}
  • options {Object}
  • returns {Object}: Returns an AST

Example

.compile

Compile the given ast or string with the given options.

Params

  • ast {Object|String}
  • options {Object}
  • returns {Object}: Returns an object that has an output property with the compiled string.

Example

.clearCache

Clear the regex cache.

Example

Options

basename

options.basename

Allow glob patterns without slashes to match a file path based on its basename. Same behavior as minimatch option matchBase.

Type: boolean

Default: false

Example

bash

options.bash

Enabled by default, this option enforces bash-like behavior with stars immediately following a bracket expression. Bash bracket expressions are similar to regex character classes, but unlike regex, a star following a bracket expression does not repeat the bracketed characters. Instead, the star is treated the same as any other star.

Type: boolean

Default: true

Example

cache

options.cache

Disable regex and function memoization.

Type: boolean

Default: undefined

dot

options.dot

Match dotfiles. Same behavior as minimatch option dot.

Type: boolean

Default: false

failglob

options.failglob

Similar to the --failglob behavior in Bash, throws an error when no matches are found.

Type: boolean

Default: undefined

ignore

options.ignore

String or array of glob patterns to match files to ignore.

Type: String|Array

Default: undefined

matchBase

options.matchBase

Alias for options.basename.

nocase

options.nocase

Use a case-insensitive regex for matching files. Same behavior as minimatch.

Type: boolean

Default: undefined

nodupes

options.nodupes

Remove duplicate elements from the result array.

Type: boolean

Default: true (enabled by default)

Example

Example of using the unescape and nodupes options together:

noglobstar

options.noglobstar

Disable matching with globstars (**).

Type: boolean

Default: undefined

nonegate

options.nonegate

Disallow negation (!) patterns, and treat leading ! as a literal character to match.

Type: boolean

Default: undefined

nonull

options.nonull

Alias for options.nullglob.

nullglob

options.nullglob

If true, when no matches are found the actual (arrayified) glob pattern is returned instead of an empty array. Same behavior as minimatch option nonull.

Type: boolean

Default: undefined

slash

options.slash

Customize the slash character(s) to use for matching.

Type: string|function

Default: [/\\] (forward slash and backslash)

star

options.star

Customize the star character(s) to use for matching. It’s not recommended that you modify this unless you have advanced knowledge of the compiler and matching rules.

Type: string|function

Default: [^/\\]*?

snapdragon

options.snapdragon

Pass your own instance of snapdragon to customize parsers or compilers.

Type: object

Default: undefined

sourcemap

options.sourcemap

Generate a source map by enabling the sourcemap option with the .parse, .compile, or .create methods.

Examples

unescape

options.unescape

Remove backslashes from returned matches.

Type: boolean

Default: undefined

Example

In this example we want to match a literal *:

unixify

options.unixify

Convert path separators on returned files to posix/unix-style forward slashes.

Type: boolean

Default: true

Example

Features

Nanomatch has full support for standard Bash glob features, including the following “metacharacters”: *, **, ? and [...].

Here are some examples of how they work:

Pattern Description
* Matches any string except for /, leading ., or /. inside a path
** Matches any string including /, but not a leading . or /. inside a path. More than two stars (e.g. ***) are treated the same as one star, and ** loses its special meaning
foo* Matches any string beginning with foo
*bar* Matches any string containing bar (beginning, middle or end)
*.min.js Matches any string ending with .min.js
[abc]*.js Matches any string beginning with a, b, or c and ending with .js
abc? Matches abcd or abcz but not abcde

The exceptions noted for * apply to all patterns that contain a *.

Not supported

The following extended-globbing features are not supported:

If you need any of these features consider using micromatch instead.

Bash expansion libs

Related library Matching Type Example Description
nanomatch (you are here) Wildcards *
expand-tilde Tildes ~
braces Braces {a,b,c}
expand-brackets Brackets [[:alpha:]]
extglob Parens !(a|b)
micromatch All all Micromatch is built on top of the other libraries.

There are many resources available on the web if you want to dive deeper into how these features work in Bash.

Benchmarks

Running benchmarks

Install dev dependencies:

Nanomatch vs. Minimatch vs. Multimatch

About

Contributing

Pull requests and stars are always welcome. For bugs and feature requests, please create an issue.

Please read the contributing guide for advice on opening issues, pull requests, and coding standards.

Running Tests

Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command:

Building docs

(This project’s readme.md is generated by verb, please don’t edit the readme directly. Any changes to the readme must be made in the .verb.md readme template.)

To generate the readme, run the following command:

You might also be interested in these projects:

  • extglob: Extended glob support for JavaScript. Adds (almost) the expressive power of regular expressions to glob… more | homepage
  • is-extglob: Returns true if a string has an extglob. | homepage
  • is-glob: Returns true if the given string looks like a glob pattern or an extglob pattern… more | homepage
  • micromatch: Glob matching for javascript/node.js. A drop-in replacement and faster alternative to minimatch and multimatch. | homepage
Commits Contributor
164 jonschlinkert
1 devongovett

Author

Jon Schlinkert


This file was generated by verb-generate-readme, v0.6.0, on February 18, 2018.



Google Cloud Storage: Node.js Client

release level npm version codecov

Node.js idiomatic client for Cloud Storage.

Cloud Storage allows world-wide storage and retrieval of any amount of data at any time. You can use Google Cloud Storage for a range of scenarios including serving website content, storing data for archival and disaster recovery, or distributing large data objects to users via direct download.

A comprehensive list of changes in each version may be found in the CHANGELOG.

Read more about the client libraries for Cloud APIs, including the older Google APIs Client Libraries, in Client Libraries Explained.

Table of contents:

Quickstart

Before you begin

  1. Select or create a Cloud Platform project.
  2. Enable billing for your project.
  3. Enable the Google Cloud Storage API.
  4. Set up authentication with a service account so you can access the API from your local workstation.

Installing the client library

Using the client library
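A minimal sketch of using the client (the bucket name is a placeholder, and authentication must already be configured as described above):

```js
// Sketch only: requires `npm install @google-cloud/storage` and configured credentials.
const {Storage} = require('@google-cloud/storage');

async function createBucket() {
  const storage = new Storage();
  // 'my-new-bucket' is a placeholder; bucket names must be globally unique
  const [bucket] = await storage.createBucket('my-new-bucket');
  console.log(`Bucket ${bucket.name} created.`);
}

createBucket().catch(console.error);
```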

Samples

Samples are in the samples/ directory. The samples’ README.md has instructions for running the samples.

Sample Source Code Try it
Add Bucket Conditional Binding source code Open in Cloud Shell
Add Bucket Default Owner Acl source code Open in Cloud Shell
Add Bucket Iam Member source code Open in Cloud Shell
Add Bucket Owner Acl source code Open in Cloud Shell
Add File Owner Acl source code Open in Cloud Shell
Storage Get Bucket Metadata. source code Open in Cloud Shell
Storage Configure Bucket Cors. source code Open in Cloud Shell
Copy File source code Open in Cloud Shell
Copy Old Version Of File. source code Open in Cloud Shell
Create New Bucket source code Open in Cloud Shell
Create Notification source code Open in Cloud Shell
Delete Bucket source code Open in Cloud Shell
Delete File source code Open in Cloud Shell
Delete Notification source code Open in Cloud Shell
Delete Old Version Of File. source code Open in Cloud Shell
Disable Bucket Lifecycle Management source code Open in Cloud Shell
Disable Default Event Based Hold source code Open in Cloud Shell
Disable Requester Pays source code Open in Cloud Shell
Disable Uniform Bucket Level Access source code Open in Cloud Shell
Download Encrypted File source code Open in Cloud Shell
Download File source code Open in Cloud Shell
Download File Using Requester Pays source code Open in Cloud Shell
Enable Bucket Lifecycle Management source code Open in Cloud Shell
Enable Default Event Based Hold source code Open in Cloud Shell
Enable Default KMS Key source code Open in Cloud Shell
Enable Requester Pays source code Open in Cloud Shell
Enable Uniform Bucket Level Access source code Open in Cloud Shell
Storage Set File Metadata. source code Open in Cloud Shell
Generate Encryption Key source code Open in Cloud Shell
Generate Signed Url source code Open in Cloud Shell
Generate V4 Read Signed Url source code Open in Cloud Shell
Generate V4 Signed Policy source code Open in Cloud Shell
Generate V4 Upload Signed Url source code Open in Cloud Shell
Get Default Event Based Hold source code Open in Cloud Shell
Get Metadata source code Open in Cloud Shell
Get Metadata Notifications source code Open in Cloud Shell
Get Requester Pays Status source code Open in Cloud Shell
Get Retention Policy source code Open in Cloud Shell
Get Uniform Bucket Level Access source code Open in Cloud Shell
Activate HMAC SA Key. source code Open in Cloud Shell
Create HMAC SA Key. source code Open in Cloud Shell
Deactivate HMAC SA Key. source code Open in Cloud Shell
Delete HMAC SA Key. source code Open in Cloud Shell
Get HMAC SA Key Metadata. source code Open in Cloud Shell
List HMAC SA Keys Metadata. source code Open in Cloud Shell
List Buckets source code Open in Cloud Shell
List Files source code Open in Cloud Shell
List Files By Prefix source code Open in Cloud Shell
List Files Paginate source code Open in Cloud Shell
List Files with Old Versions. source code Open in Cloud Shell
List Notifications source code Open in Cloud Shell
Lock Retention Policy source code Open in Cloud Shell
Make Public source code Open in Cloud Shell
Move File source code Open in Cloud Shell
Notifications source code Open in Cloud Shell
Print Bucket Acl source code Open in Cloud Shell
Print Bucket Acl For User source code Open in Cloud Shell
Print File Acl source code Open in Cloud Shell
Print File Acl For User source code Open in Cloud Shell
Quickstart source code Open in Cloud Shell
Release Event Based Hold source code Open in Cloud Shell
Release Temporary Hold source code Open in Cloud Shell
Remove Bucket Conditional Binding source code Open in Cloud Shell
Storage Remove Bucket Cors Configuration. source code Open in Cloud Shell
Remove Bucket Default Owner source code Open in Cloud Shell
Remove Bucket Iam Member source code Open in Cloud Shell
Remove Bucket Owner Acl source code Open in Cloud Shell
Remove File Owner Acl source code Open in Cloud Shell
Remove Retention Policy source code Open in Cloud Shell
Rename File source code Open in Cloud Shell
Rotate Encryption Key source code Open in Cloud Shell
Set Event Based Hold source code Open in Cloud Shell
Set Retention Policy source code Open in Cloud Shell
Set Temporary Hold source code Open in Cloud Shell
Upload a directory to a bucket. source code Open in Cloud Shell
Upload Encrypted File source code Open in Cloud Shell
Upload File source code Open in Cloud Shell
Upload File With Kms Key source code Open in Cloud Shell
View Bucket Iam Members source code Open in Cloud Shell

The Google Cloud Storage Node.js Client API Reference documentation also contains samples.

Our client libraries follow the Node.js release schedule. Libraries are compatible with all current active and maintenance versions of Node.js.

Client libraries targeting some end-of-life versions of Node.js are available, and can be installed via npm dist-tags. The dist-tags follow the naming convention legacy-(version).

Legacy Node.js versions are supported as a best effort:

  • Legacy versions will not be tested in continuous integration.
  • Some security patches may not be able to be backported.
  • Dependencies will not be kept up-to-date, and features will not be backported.

Legacy tags available

  • legacy-8: install client libraries from this dist-tag for versions compatible with Node.js 8.

Versioning

This library follows Semantic Versioning.

This library is considered to be General Availability (GA). This means it is stable; the code surface will not change in backwards-incompatible ways unless absolutely necessary (e.g. because of critical security issues) or with an extensive deprecation period. Issues and requests against GA libraries are addressed with the highest priority.

More Information: Google Cloud Platform Launch Stages

Contributing

Contributions welcome! See the Contributing Guide.

Please note that this README.md, the samples/README.md, and a variety of configuration files in this repository (including .nycrc and tsconfig.json) are generated from a central template. To edit one of these files, make an edit to its template in this directory.

Apache Version 2.0

See LICENSE



ESLint Plugin TypeScript

An ESLint plugin which provides lint rules for TypeScript codebases.

CI NPM Version NPM Downloads

Getting Started

These docs walk you through setting up ESLint, this plugin, and our parser. If you know what you’re doing and just want a quick start, read on…

Quick-start

Installation

Make sure you have TypeScript and @typescript-eslint/parser installed:

Then install the plugin:

It is important that you use the same version number for @typescript-eslint/parser and @typescript-eslint/eslint-plugin.

Note: If you installed ESLint globally (using the -g flag) then you must also install @typescript-eslint/eslint-plugin globally.

Usage

Add @typescript-eslint/parser to the parser field and @typescript-eslint to the plugins section of your .eslintrc configuration file, then configure the rules you want to use under the rules section.
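For example, a minimal .eslintrc.js along those lines (the rule chosen under rules is just an illustration):

```js
// .eslintrc.js -- a minimal sketch; pick the rules that suit your project
module.exports = {
  parser: '@typescript-eslint/parser',
  plugins: ['@typescript-eslint'],
  rules: {
    '@typescript-eslint/no-explicit-any': 'error',
  },
};
```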

You can also enable all the recommended rules for our plugin. Add plugin:@typescript-eslint/recommended in extends:

Note: Make sure to use eslint --ext .js,.ts since by default eslint will only search for .js files.

You can also use eslint:recommended (the set of rules which are recommended for all projects by the ESLint Team) with this plugin:

As of version 2 of this plugin, by design, none of the rules in the main recommended config require type-checking in order to run. This means that they are more lightweight and faster to run.

Some highly valuable rules simply require type-checking in order to be implemented correctly, however, so we provide an additional config you can extend from called recommended-requiring-type-checking. You would apply this in addition to the recommended configs previously mentioned, e.g.:

Pro Tip: For larger codebases you may want to consider splitting your linting into two separate stages: 1. fast feedback rules which operate purely based on syntax (no type-checking), 2. rules which are based on semantics (type-checking).

You can read more about linting with type information here

Key: :heavy_check_mark: = recommended, :wrench: = fixable, :thought_balloon: = requires type information

Name Description :heavy_check_mark: :wrench: :thought_balloon:
@typescript-eslint/adjacent-overload-signatures Require that member overloads be consecutive :heavy_check_mark:
@typescript-eslint/array-type Requires using either T[] or Array<T> for arrays :wrench:
@typescript-eslint/await-thenable Disallows awaiting a value that is not a Thenable :heavy_check_mark: :thought_balloon:
@typescript-eslint/ban-ts-comment Bans @ts-<directive> comments from being used or requires descriptions after directive :heavy_check_mark:
@typescript-eslint/ban-tslint-comment Bans // tslint:<rule-flag> comments from being used :wrench:
@typescript-eslint/ban-types Bans specific types from being used :heavy_check_mark: :wrench:
@typescript-eslint/class-literal-property-style Ensures that literals on classes are exposed in a consistent style :wrench:
@typescript-eslint/consistent-indexed-object-style Enforce or disallow the use of the record type :wrench:
@typescript-eslint/consistent-type-assertions Enforces consistent usage of type assertions
@typescript-eslint/consistent-type-definitions Consistent with type definition either interface or type :wrench:
@typescript-eslint/consistent-type-imports Enforces consistent usage of type imports :wrench:
@typescript-eslint/explicit-function-return-type Require explicit return types on functions and class methods
@typescript-eslint/explicit-member-accessibility Require explicit accessibility modifiers on class properties and methods :wrench:
@typescript-eslint/explicit-module-boundary-types Require explicit return and argument types on exported functions’ and classes’ public class methods :heavy_check_mark:
@typescript-eslint/member-delimiter-style Require a specific member delimiter style for interfaces and type literals :wrench:
@typescript-eslint/member-ordering Require a consistent member declaration order
@typescript-eslint/method-signature-style Enforces using a particular method signature syntax. :wrench:
@typescript-eslint/naming-convention Enforces naming conventions for everything across a codebase :thought_balloon:
@typescript-eslint/no-base-to-string Requires that .toString() is only called on objects which provide useful information when stringified :thought_balloon:
@typescript-eslint/no-confusing-non-null-assertion Disallow non-null assertion in locations that may be confusing :wrench:
@typescript-eslint/no-confusing-void-expression Requires expressions of type void to appear in statement position :wrench: :thought_balloon:
@typescript-eslint/no-dynamic-delete Disallow the delete operator with computed key expressions :wrench:
@typescript-eslint/no-empty-interface Disallow the declaration of empty interfaces :heavy_check_mark: :wrench:
@typescript-eslint/no-explicit-any Disallow usage of the any type :heavy_check_mark: :wrench:
@typescript-eslint/no-extra-non-null-assertion Disallow extra non-null assertion :heavy_check_mark: :wrench:
@typescript-eslint/no-extraneous-class Forbids the use of classes as namespaces
@typescript-eslint/no-floating-promises Requires Promise-like values to be handled appropriately :heavy_check_mark: :thought_balloon:
@typescript-eslint/no-for-in-array Disallow iterating over an array with a for-in loop :heavy_check_mark: :thought_balloon:
@typescript-eslint/no-implicit-any-catch Disallow usage of the implicit any type in catch clauses :wrench:
@typescript-eslint/no-inferrable-types Disallows explicit type declarations for variables or parameters initialized to a number, string, or boolean :heavy_check_mark: :wrench:
@typescript-eslint/no-invalid-void-type Disallows usage of void type outside of generic or return types
@typescript-eslint/no-misused-new Enforce valid definition of new and constructor :heavy_check_mark:
@typescript-eslint/no-misused-promises Avoid using promises in places not designed to handle them :heavy_check_mark: :thought_balloon:
@typescript-eslint/no-namespace Disallow the use of custom TypeScript modules and namespaces :heavy_check_mark:
@typescript-eslint/no-non-null-asserted-optional-chain Disallows using a non-null assertion after an optional chain expression :heavy_check_mark:
@typescript-eslint/no-non-null-assertion Disallows non-null assertions using the ! postfix operator :heavy_check_mark:
@typescript-eslint/no-parameter-properties Disallow the use of parameter properties in class constructors
@typescript-eslint/no-require-imports Disallows invocation of require()
@typescript-eslint/no-this-alias Disallow aliasing this :heavy_check_mark:
@typescript-eslint/no-type-alias Disallow the use of type aliases
@typescript-eslint/no-unnecessary-boolean-literal-compare Flags unnecessary equality comparisons against boolean literals :wrench: :thought_balloon:
@typescript-eslint/no-unnecessary-condition Prevents conditionals where the type is always truthy or always falsy :wrench: :thought_balloon:
@typescript-eslint/no-unnecessary-qualifier Warns when a namespace qualifier is unnecessary :wrench: :thought_balloon:
@typescript-eslint/no-unnecessary-type-arguments Enforces that type arguments will not be used if not required :wrench: :thought_balloon:
@typescript-eslint/no-unnecessary-type-assertion Warns if a type assertion does not change the type of an expression :heavy_check_mark: :wrench: :thought_balloon:
@typescript-eslint/no-unnecessary-type-constraint Disallows unnecessary constraints on generic types :wrench:
@typescript-eslint/no-unsafe-assignment Disallows assigning any to variables and properties :heavy_check_mark: :thought_balloon:
@typescript-eslint/no-unsafe-call Disallows calling an any type value :heavy_check_mark: :thought_balloon:
@typescript-eslint/no-unsafe-member-access Disallows member access on any typed variables :heavy_check_mark: :thought_balloon:
@typescript-eslint/no-unsafe-return Disallows returning any from a function :heavy_check_mark: :thought_balloon:
@typescript-eslint/no-var-requires Disallows the use of require statements except in import statements :heavy_check_mark:
@typescript-eslint/non-nullable-type-assertion-style Prefers a non-null assertion over explicit type cast when possible :wrench: :thought_balloon:
@typescript-eslint/prefer-as-const Prefer usage of as const over literal type :heavy_check_mark: :wrench:
@typescript-eslint/prefer-enum-initializers Prefer initializing each enums member value
@typescript-eslint/prefer-for-of Prefer a ‘for-of’ loop over a standard ‘for’ loop if the index is only used to access the array being iterated
@typescript-eslint/prefer-function-type Use function types instead of interfaces with call signatures :wrench:
@typescript-eslint/prefer-includes Enforce includes method over indexOf method :wrench: :thought_balloon:
@typescript-eslint/prefer-literal-enum-member Require that all enum members be literal values to prevent unintended enum member name shadow issues
@typescript-eslint/prefer-namespace-keyword Require the use of the namespace keyword instead of the module keyword to declare custom TypeScript modules :heavy_check_mark: :wrench:
@typescript-eslint/prefer-nullish-coalescing Enforce the usage of the nullish coalescing operator instead of logical chaining :thought_balloon:
@typescript-eslint/prefer-optional-chain Prefer using concise optional chain expressions instead of chained logical ands
@typescript-eslint/prefer-readonly Requires that private members are marked as readonly if they’re never modified outside of the constructor :wrench: :thought_balloon:
@typescript-eslint/prefer-readonly-parameter-types Requires that function parameters are typed as readonly to prevent accidental mutation of inputs :thought_balloon:
@typescript-eslint/prefer-reduce-type-parameter Prefer using type parameter when calling Array#reduce instead of casting :wrench: :thought_balloon:
@typescript-eslint/prefer-regexp-exec Enforce that RegExp#exec is used instead of String#match if no global flag is provided :heavy_check_mark: :thought_balloon:
@typescript-eslint/prefer-string-starts-ends-with Enforce the use of String#startsWith and String#endsWith instead of other equivalent methods of checking substrings :wrench: :thought_balloon:
@typescript-eslint/prefer-ts-expect-error Recommends using @ts-expect-error over @ts-ignore :wrench:
@typescript-eslint/promise-function-async Requires any function or method that returns a Promise to be marked async :wrench: :thought_balloon:
@typescript-eslint/require-array-sort-compare Requires Array#sort calls to always provide a compareFunction :thought_balloon:
@typescript-eslint/restrict-plus-operands When adding two variables, operands must both be of type number or of type string :heavy_check_mark: :thought_balloon:
@typescript-eslint/restrict-template-expressions Enforce template literal expressions to be of string type :heavy_check_mark: :thought_balloon:
@typescript-eslint/strict-boolean-expressions Restricts the types allowed in boolean expressions :thought_balloon:
@typescript-eslint/switch-exhaustiveness-check Exhaustiveness checking in switch with union type :thought_balloon:
@typescript-eslint/triple-slash-reference Sets preference level for triple slash directives versus ES6-style import declarations :heavy_check_mark:
@typescript-eslint/type-annotation-spacing Require consistent spacing around type annotations :wrench:
@typescript-eslint/typedef Requires type annotations to exist
@typescript-eslint/unbound-method Enforces unbound methods are called with their expected scope :heavy_check_mark: :thought_balloon:
@typescript-eslint/unified-signatures Warns for any two overloads that could be unified into one by using a union or an optional/rest parameter

Extension Rules

In some cases, ESLint provides a rule itself, but it doesn’t support TypeScript syntax; either it crashes, or it ignores the syntax, or it falsely reports against it. In these cases, we create what we call an extension rule; a rule within our plugin that has the same functionality, but also supports TypeScript.

Key: :heavy_check_mark: = recommended, :wrench: = fixable, :thought_balloon: = requires type information

Name Description :heavy_check_mark: :wrench: :thought_balloon:
@typescript-eslint/brace-style Enforce consistent brace style for blocks :wrench:
@typescript-eslint/comma-dangle Require or disallow trailing comma :wrench:
@typescript-eslint/comma-spacing Enforces consistent spacing before and after commas :wrench:
@typescript-eslint/default-param-last Enforce default parameters to be last
@typescript-eslint/dot-notation enforce dot notation whenever possible :wrench: :thought_balloon:
@typescript-eslint/func-call-spacing Require or disallow spacing between function identifiers and their invocations :wrench:
@typescript-eslint/indent Enforce consistent indentation :wrench:
@typescript-eslint/init-declarations require or disallow initialization in variable declarations
@typescript-eslint/keyword-spacing Enforce consistent spacing before and after keywords :wrench:
@typescript-eslint/lines-between-class-members Require or disallow an empty line between class members :wrench:
@typescript-eslint/no-array-constructor Disallow generic Array constructors :heavy_check_mark: :wrench:
@typescript-eslint/no-dupe-class-members Disallow duplicate class members
@typescript-eslint/no-duplicate-imports Disallow duplicate imports
@typescript-eslint/no-empty-function Disallow empty functions :heavy_check_mark:
@typescript-eslint/no-extra-parens Disallow unnecessary parentheses :wrench:
@typescript-eslint/no-extra-semi Disallow unnecessary semicolons :heavy_check_mark: :wrench:
@typescript-eslint/no-implied-eval Disallow the use of eval()-like methods :heavy_check_mark: :thought_balloon:
@typescript-eslint/no-invalid-this Disallow this keywords outside of classes or class-like objects
@typescript-eslint/no-loop-func Disallow function declarations that contain unsafe references inside loop statements
@typescript-eslint/no-loss-of-precision Disallow literal numbers that lose precision
@typescript-eslint/no-magic-numbers Disallow magic numbers
@typescript-eslint/no-redeclare Disallow variable redeclaration
@typescript-eslint/no-shadow Disallow variable declarations from shadowing variables declared in the outer scope
@typescript-eslint/no-throw-literal Disallow throwing literals as exceptions :thought_balloon:
@typescript-eslint/no-unused-expressions Disallow unused expressions
@typescript-eslint/no-unused-vars Disallow unused variables :heavy_check_mark:
@typescript-eslint/no-use-before-define Disallow the use of variables before they are defined
@typescript-eslint/no-useless-constructor Disallow unnecessary constructors
@typescript-eslint/quotes Enforce the consistent use of either backticks, double, or single quotes :wrench:
@typescript-eslint/require-await Disallow async functions which have no await expression :heavy_check_mark: :thought_balloon:
@typescript-eslint/return-await Enforces consistent returning of awaited values :wrench: :thought_balloon:
@typescript-eslint/semi Require or disallow semicolons instead of ASI :wrench:
@typescript-eslint/space-before-function-paren Enforces consistent spacing before function parenthesis :wrench:
@typescript-eslint/space-infix-ops Require spacing around infix operators :wrench:

Contributing

See the contributing guide here.



micromatch Donate NPM version NPM monthly downloads NPM total downloads Linux Build Status

Glob matching for javascript/node.js. A drop-in replacement and faster alternative to minimatch and multimatch.

Please consider following this project’s author, Jon Schlinkert, and consider starring the project to show your :heart: and support.

Table of Contents

Details

Install

Install with npm:

Quickstart

The main export takes a list of strings and one or more glob patterns:

Use .isMatch() for boolean matching:

Switching from minimatch and multimatch is easy!


Why use micromatch?

micromatch is a replacement for minimatch and multimatch

  • More complete support for the Bash 4.3 specification than minimatch and multimatch. Micromatch passes all of the spec tests from bash, including some that bash still fails.
  • Fast & Performant - Loads in about 5ms and performs fast matches.
  • Glob matching - Using wildcards (* and ?), globstars (**) for nested directories
  • Accurate - Covers more scenarios than minimatch
  • Well tested - More than 5,000 test assertions
  • Windows support - More reliable windows support than minimatch and multimatch.

Matching features

  • Wildcards (**, *.js)
  • Negation ('!a/*.js', '*!(b).js')
  • extglobs (+(x|y), !(a|b))
  • POSIX character classes ([[:alpha:][:digit:]])
  • brace expansion (foo/{1..5}.md, bar/{a,b,c}.js)
  • regex character classes (foo-[1-5].js)
  • regex logical “or” (foo/(abc|xyz).js)

You can mix and match these features to create whatever patterns you need!

Switching to micromatch

(There is one notable difference between micromatch and minimatch in regards to how backslashes are handled. See the notes about backslashes for more information.)

From minimatch

Use micromatch.isMatch() instead of minimatch():

Use micromatch.match() instead of minimatch.match():

From multimatch

Same signature:

API

Params

  • {String|Array}: list List of strings to match.
  • {String|Array}: patterns One or more glob patterns to use for matching.
  • {Object}: options See available options
  • returns {Array}: Returns an array of matches

Example

.matcher

Returns a matcher function from the given glob pattern and options. The returned function takes a string to match as its only argument and returns true if the string is a match.

Params

  • pattern {String}: Glob pattern
  • options {Object}
  • returns {Function}: Returns a matcher function.

Example

.isMatch

Returns true if any of the given glob patterns match the specified string.

Params

  • {String}: str The string to test.
  • {String|Array}: patterns One or more glob patterns to use for matching.
  • {Object}: See available options.
  • returns {Boolean}: Returns true if any patterns match str

Example

.not

Returns a list of strings that do not match any of the given patterns.

Params

  • list {Array}: Array of strings to match.
  • patterns {String|Array}: One or more glob pattern to use for matching.
  • options {Object}: See available options for changing how matches are performed
  • returns {Array}: Returns an array of strings that do not match the given patterns.

Example

.contains

Returns true if the given string contains the given pattern. Similar to .isMatch but the pattern can match any part of the string.

Params

  • str {String}: The string to match.
  • patterns {String|Array}: Glob pattern to use for matching.
  • options {Object}: See available options for changing how matches are performed
  • returns {Boolean}: Returns true if the pattern matches any part of str.

Example

.matchKeys

Filter the keys of the given object with the given glob pattern and options. Does not attempt to match nested keys. If you need this feature, use glob-object instead.

Params

  • object {Object}: The object with keys to filter.
  • patterns {String|Array}: One or more glob patterns to use for matching.
  • options {Object}: See available options for changing how matches are performed
  • returns {Object}: Returns an object with only keys that match the given patterns.

Example

.some

Returns true if some of the strings in the given list match any of the given glob patterns.

Params

  • list {String|Array}: The string or array of strings to test. Returns as soon as the first match is found.
  • patterns {String|Array}: One or more glob patterns to use for matching.
  • options {Object}: See available options for changing how matches are performed
  • returns {Boolean}: Returns true if any patterns match str

Example

.every

Returns true if every string in the given list matches any of the given glob patterns.

Params

  • list {String|Array}: The string or array of strings to test.
  • patterns {String|Array}: One or more glob patterns to use for matching.
  • options {Object}: See available options for changing how matches are performed
  • returns {Boolean}: Returns true if every string in list matches at least one pattern

Example

.all

Returns true if all of the given patterns match the specified string.

Params

  • str {String|Array}: The string to test.
  • patterns {String|Array}: One or more glob patterns to use for matching.
  • options {Object}: See available options for changing how matches are performed
  • returns {Boolean}: Returns true if all patterns match str

Example

.capture

Returns an array of matches captured by the glob pattern in the string, or null if the pattern did not match.

Params

  • glob {String}: Glob pattern to use for matching.
  • input {String}: String to match
  • options {Object}: See available options for changing how matches are performed
  • returns {Array|null}: Returns an array of captures if the input matches the glob pattern, otherwise null.

Example

.makeRe

Create a regular expression from the given glob pattern.

Params

  • pattern {String}: A glob pattern to convert to regex.
  • options {Object}
  • returns {RegExp}: Returns a regex created from the given pattern.

Example

.scan

Scan a glob pattern to separate the pattern into segments. Used by the split method.

Params

  • pattern {String}
  • options {Object}
  • returns {Object}: Returns an object with

Example

.parse

Parse a glob pattern to create the source string for a regular expression.

Params

  • glob {String}
  • options {Object}
  • returns {Object}: Returns an object with useful properties and output to be used as regex source string.

Example

.braces

Process the given brace pattern.

Params

  • pattern {String}: String with brace pattern to process.
  • options {Object}: Any options to change how expansion is performed. See the braces library for all available options.
  • returns {Array}

Example

Options

Option Type Default value Description
basename boolean false If set, then patterns without slashes will be matched against the basename of the path if it contains slashes. For example, a?b would match the path /xyz/123/acb, but not /xyz/acb/123.
bash boolean false Follow bash matching rules more strictly - disallows backslashes as escape characters, and treats single stars as globstars (**).
capture boolean undefined Return regex matches in supporting methods.
contains boolean undefined Allows glob to match any part of the given string(s).
cwd string process.cwd() Current working directory. Used by picomatch.split()
debug boolean undefined Debug regular expressions when an error is thrown.
dot boolean false Match dotfiles. Otherwise dotfiles are ignored unless a . is explicitly defined in the pattern.
expandRange function undefined Custom function for expanding ranges in brace patterns, such as {a..z}. The function receives the range values as two arguments, and it must return a string to be used in the generated regex. It’s recommended that returned strings be wrapped in parentheses. This option is overridden by the expandBrace option.
failglob boolean false Similar to the failglob behavior in Bash, throws an error when no matches are found. Based on the bash option of the same name.
fastpaths boolean true To speed up processing, full parsing is skipped for a handful common glob patterns. Disable this behavior by setting this option to false.
flags string undefined Regex flags to use in the generated regex. If defined, the nocase option will be overridden.
format function undefined Custom function for formatting the returned string. This is useful for removing leading slashes, converting Windows paths to Posix paths, etc.
ignore array|string undefined One or more glob patterns for excluding strings that should not be matched from the result.
keepQuotes boolean false Retain quotes in the generated regex, since quotes may also be used as an alternative to backslashes.
literalBrackets boolean undefined When true, brackets in the glob pattern will be escaped so that only literal brackets will be matched.
lookbehinds boolean true
matchBase boolean false Alias for basename
maxLength number 65536 Limit the max length of the input string. An error is thrown if the input string is longer than this value.
nobrace boolean false Disable brace matching, so that {a,b} and {1..3} would be treated as literal characters.
nobracket boolean undefined Disable matching with regex brackets.
nocase boolean false Perform case-insensitive matching. Equivalent to the regex i flag. Note that this option is ignored when the flags option is defined.
nodupes boolean true Deprecated, use nounique instead. This option will be removed in a future major release. By default duplicates are removed. Disable uniquification by setting this option to false.
noext boolean false Alias for noextglob
noextglob boolean false Disable support for matching with extglobs (like +(a\|b))
noglobstar boolean false Disable support for matching nested directories with globstars (**)
nonegate boolean false Disable support for negating with leading !
noquantifiers boolean false Disable support for regex quantifiers (like a{1,2}) and treat them as brace patterns to be expanded.
onIgnore function undefined Function to be called on ignored items.
onMatch function undefined Function to be called on matched items.
onResult function undefined Function to be called on all items, regardless of whether or not they are matched or ignored.
posix boolean false
posixSlashes boolean undefined Convert all slashes in file paths to forward slashes. This does not convert slashes in the glob pattern itself
prepend boolean undefined String to prepend to the generated regex used for matching.
regex boolean false Use regular expression rules for + (instead of matching literal +), and for stars that follow closing parentheses or brackets (as in )* and ]*).
strictBrackets boolean undefined Throw an error if brackets, braces, or parens are imbalanced.
strictSlashes boolean undefined When true, picomatch won’t match trailing slashes with single stars.
unescape boolean undefined Remove preceding backslashes from escaped glob characters before creating the regular expression to perform matches.
unixify boolean undefined Alias for posixSlashes, for backwards compatibility.
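
As an illustration of what the dot option controls, the two regexes below sketch how matching a bare * differs with and without it. These are simplified stand-ins, not the exact regexes picomatch emits:

```javascript
// Simplified sketch of the `dot` option, using plain RegExp.
const noDot   = /^(?!\.)[^/]*$/; // dot: false — dotfiles are excluded
const withDot = /^[^/]*$/;       // dot: true  — dotfiles match too

console.log(noDot.test('.env'));     // false
console.log(withDot.test('.env'));   // true
console.log(noDot.test('index.js')); // true
```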

Options Examples

options.basename

Allow glob patterns without slashes to match a file path based on its basename. Same behavior as minimatch option matchBase.

Type: Boolean

Default: false

Example

options.bash

Enabled by default, this option enforces bash-like behavior with stars immediately following a bracket expression. Bash bracket expressions are similar to regex character classes, but unlike regex, a star following a bracket expression does not repeat the bracketed characters. Instead, the star is treated the same as any other star.

Type: Boolean

Default: true

Example

options.expandRange

Type: function

Default: undefined

Custom function for expanding ranges in brace patterns. The fill-range library is ideal for this purpose, or you can use custom code to do whatever you need.

Example

The following example shows how to create a glob that matches a numeric folder name between 01 and 25, with leading zeros.
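
A sketch of the idea follows. The real-world approach would use a library such as fill-range; the hand-rolled expandRange below is a hypothetical stand-in that shows what the option's contract looks like:

```javascript
// Hypothetical range expander standing in for fill-range.
// Receives the two range values and returns a regex fragment.
function expandRange(a, b) {
  const pad = a.length;
  const out = [];
  for (let i = Number(a); i <= Number(b); i++) {
    out.push(String(i).padStart(pad, '0'));
  }
  return `(${out.join('|')})`; // wrapped in parens, as the docs recommend
}

// Roughly what a matcher built from 'foo/{01..25}/bar' would behave like:
const re = new RegExp(`^foo/${expandRange('01', '25')}/bar$`);
console.log(re.test('foo/07/bar')); // true
console.log(re.test('foo/26/bar')); // false
console.log(re.test('foo/1/bar'));  // false — leading zero required
```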

options.format

Type: function

Default: undefined

Custom function for formatting strings before they’re matched.

Example

options.ignore

String or array of glob patterns to match files to ignore.

Type: String|Array

Default: undefined

options.matchBase

Alias for options.basename.

options.noextglob

Disable extglob support, so that extglobs are regarded as literal characters.

Type: Boolean

Default: undefined

Examples

options.nonegate

Disallow negation (!) patterns, and treat leading ! as a literal character to match.

Type: Boolean

Default: undefined

options.noglobstar

Disable matching with globstars (**).

Type: Boolean

Default: undefined

options.nonull

Alias for options.nullglob.

options.nullglob

If true, when no matches are found the actual (arrayified) glob pattern is returned instead of an empty array. Same behavior as minimatch option nonull.

Type: Boolean

Default: undefined

options.onIgnore

options.onMatch

options.onResult

options.posixSlashes

Convert path separators on returned files to posix/unix-style forward slashes. Aliased as unixify for backwards compatibility.

Type: Boolean

Default: true on windows, false everywhere else.

Example

options.unescape

Remove backslashes from escaped glob characters before creating the regular expression to perform matches.

Type: Boolean

Default: undefined

Example

In this example we want to match a literal *:
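
The escaped-star case can be illustrated with a plain regex. This shows the behavior the option controls, not picomatch's exact output:

```javascript
// The glob 'a\\*.js', with the escape honored, compiles to something like:
const literalStar = /^a\*\.js$/;
console.log(literalStar.test('a*.js')); // true  — the star is literal
console.log(literalStar.test('ab.js')); // false — not a wildcard here
```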



Extended globbing

Micromatch supports the following extended globbing features.

Extglobs

Extended globbing, as described by the bash man page:

pattern regex equivalent description
?(pattern) (pattern)? Matches zero or one occurrence of the given patterns
*(pattern) (pattern)* Matches zero or more occurrences of the given patterns
+(pattern) (pattern)+ Matches one or more occurrences of the given patterns
@(pattern) (pattern) * Matches one of the given patterns
!(pattern) N/A (equivalent regex is much more complicated) Matches anything except one of the given patterns

* Note that @ isn’t a regex character.
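
The regex equivalents from the table can be exercised directly with plain RegExp:

```javascript
// ?(bc) between a and d — zero or one occurrence:
const zeroOrOne = /^a(bc)?d$/;
// +(bc) between a and d — one or more occurrences:
const oneOrMore = /^a(bc)+d$/;

console.log(zeroOrOne.test('ad'));     // true
console.log(zeroOrOne.test('abcd'));   // true
console.log(oneOrMore.test('ad'));     // false
console.log(oneOrMore.test('abcbcd')); // true
```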

Braces

Brace patterns can be used to match specific ranges or sets of characters.

Example

The pattern {f,b}*/{1..3}/{b,q}* would match any of following strings:

foo/1/bar
foo/2/bar
foo/3/bar
baz/1/qux
baz/2/qux
baz/3/qux

Visit braces to see the full range of features and options related to brace expansion, or to create brace matching or expansion related issues.
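
Conceptually, a brace pattern compiles down to an alternation. A simplified regex equivalent of {f,b}*/{1..3}/{b,q}* (not micromatch's exact output) behaves like this:

```javascript
// Each brace set becomes an alternation; each * becomes [^/]* (no slashes).
const re = /^(f|b)[^/]*\/(1|2|3)\/(b|q)[^/]*$/;
const candidates = ['foo/1/bar', 'baz/3/qux', 'other/1/bar'];
console.log(candidates.filter(s => re.test(s))); // ['foo/1/bar', 'baz/3/qux']
```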

Regex character classes

Given the list: ['a.js', 'b.js', 'c.js', 'd.js', 'E.js']:

  • [ac].js: matches both a and c, returning ['a.js', 'c.js']
  • [b-d].js: matches from b to d, returning ['b.js', 'c.js', 'd.js']
  • a/[A-Z].js: matches an uppercase letter, returning ['a/E.js']

Learn about regex character classes.
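
The same classes can be exercised directly with plain RegExp against that list:

```javascript
const files = ['a.js', 'b.js', 'c.js', 'd.js', 'E.js'];
console.log(files.filter(f => /^[ac]\.js$/.test(f)));  // ['a.js', 'c.js']
console.log(files.filter(f => /^[b-d]\.js$/.test(f))); // ['b.js', 'c.js', 'd.js']
```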

Regex groups

Given ['a.js', 'b.js', 'c.js', 'd.js', 'E.js']:

  • (a|c).js: would match either a or c, returning ['a.js', 'c.js']
  • (b|d).js: would match either b or d, returning ['b.js', 'd.js']
  • (b|[A-Z]).js: would match either b or an uppercase letter, returning ['b.js', 'E.js']

As with regex, parens can be nested, so patterns like ((a|b)|c)/b will work, although brace expansion might be friendlier to use, depending on preference.
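
Again, the groups behave just like their plain-regex counterparts:

```javascript
const files = ['a.js', 'b.js', 'c.js', 'd.js', 'E.js'];
console.log(files.filter(f => /^(a|c)\.js$/.test(f)));     // ['a.js', 'c.js']
console.log(files.filter(f => /^(b|[A-Z])\.js$/.test(f))); // ['b.js', 'E.js']
```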

POSIX bracket expressions

POSIX brackets are intended to be more user-friendly than regex character classes. This of course is in the eye of the beholder.

Example
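
As a sketch of the correspondence, a POSIX class such as [[:alpha:]] behaves like the regex class [a-zA-Z]:

```javascript
// [[:alpha:]]+ in a glob corresponds roughly to [a-zA-Z]+ in regex:
const alpha = /^[a-zA-Z]+\.js$/;
console.log(alpha.test('foo.js')); // true
console.log(alpha.test('42.js'));  // false
```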


Notes

Bash 4.3 parity

Whenever possible, matching behavior is based on the behavior of Bash 4.3, which is mostly consistent with minimatch.

However, it’s surprising how many edge cases and rabbit holes there are with glob matching. Since there is no real glob specification, and micromatch is more accurate than both Bash and minimatch, there are cases where best-guesses were made for behavior. In a few cases where Bash had no answers, we used wildmatch (used by git) as a fallback.

Backslashes

There is an important, notable difference between minimatch and micromatch in regards to how backslashes are handled in glob patterns.

  • Micromatch exclusively and explicitly reserves backslashes for escaping characters in a glob pattern, even on windows, which is consistent with bash behavior. More importantly, unescaping globs can result in unsafe regular expressions.
  • Minimatch converts all backslashes to forward slashes, which means you can’t use backslashes to escape any characters in your glob patterns.

We made this decision for micromatch for a couple of reasons:

  • Consistency with bash conventions.
  • Glob patterns are not filepaths. They are a type of regular language that is converted to a JavaScript regular expression. Thus, when forward slashes are defined in a glob pattern, the resulting regular expression will match windows or POSIX path separators just fine.

A note about joining paths to globs

Note that when you pass something like path.join('foo', '*') to micromatch, you are creating a filepath and expecting it to still work as a glob pattern. This causes problems on windows, since the path.sep is \\.

In other words, since \\ is reserved as an escape character in globs, on windows path.join('foo', '*') would result in foo\\*, which tells micromatch to match * as a literal character. This is the same behavior as bash.

To solve this, you might be inspired to do something like 'foo\\*'.replace(/\\/g, '/'), but this causes another, potentially much more serious, problem.

Benchmarks

Running benchmarks

Install dependencies for running benchmarks:

Run the benchmarks:

Latest results

As of April 10, 2019 (longer bars are better):

# .makeRe star
  micromatch x 1,724,735 ops/sec ±1.69% (87 runs sampled)
  minimatch x 649,565 ops/sec ±1.93% (91 runs sampled)

# .makeRe star; dot=true
  micromatch x 1,302,127 ops/sec ±1.43% (92 runs sampled)
  minimatch x 556,242 ops/sec ±0.71% (86 runs sampled)

# .makeRe globstar
  micromatch x 1,393,992 ops/sec ±0.71% (89 runs sampled)
  minimatch x 1,112,801 ops/sec ±2.02% (91 runs sampled)

# .makeRe globstars
  micromatch x 1,419,097 ops/sec ±0.34% (94 runs sampled)
  minimatch x 541,207 ops/sec ±1.66% (93 runs sampled)

# .makeRe with leading star
  micromatch x 1,247,825 ops/sec ±0.97% (94 runs sampled)
  minimatch x 489,660 ops/sec ±0.63% (94 runs sampled)

# .makeRe - braces
  micromatch x 206,301 ops/sec ±1.62% (81 runs sampled)
  minimatch x 115,986 ops/sec ±0.59% (94 runs sampled)

# .makeRe braces - range (expanded)
  micromatch x 27,782 ops/sec ±0.79% (88 runs sampled)
  minimatch x 4,683 ops/sec ±1.20% (92 runs sampled)

# .makeRe braces - range (compiled)
  micromatch x 134,056 ops/sec ±2.73% (77 runs sampled)
  minimatch x 977 ops/sec ±0.85% (91 runs sampled)

# .makeRe braces - nested ranges (expanded)
  micromatch x 18,353 ops/sec ±0.95% (91 runs sampled)
  minimatch x 4,514 ops/sec ±1.04% (93 runs sampled)

# .makeRe braces - nested ranges (compiled)
  micromatch x 38,916 ops/sec ±1.85% (82 runs sampled)
  minimatch x 980 ops/sec ±0.54% (93 runs sampled)

# .makeRe braces - set (compiled)
  micromatch x 141,088 ops/sec ±1.70% (70 runs sampled)
  minimatch x 43,385 ops/sec ±0.87% (93 runs sampled)

# .makeRe braces - nested sets (compiled)
  micromatch x 87,272 ops/sec ±2.85% (71 runs sampled)
  minimatch x 25,327 ops/sec ±1.59% (86 runs sampled)

Contributing

All contributions are welcome! Please read the contributing guide to get started.

Bug reports

Please create an issue if you encounter a bug or matching behavior that doesn’t seem correct. If you find a matching-related issue, please:

  • research existing issues first (open and closed)
  • visit the minimatch documentation to cross-check expected behavior in node.js
  • if all else fails, since there is no real specification for globs, we will probably need to discuss expected behavior and decide how to resolve it, which means any detail you can provide to help with this discussion would be greatly appreciated.

Platform issues

It’s important to us that micromatch work consistently on all platforms. If you encounter any platform-specific matching or path related issues, please let us know (pull requests are also greatly appreciated).

About

Contributing

Pull requests and stars are always welcome. For bugs and feature requests, please create an issue.

Please read the contributing guide for advice on opening issues, pull requests, and coding standards.

Running Tests

Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command:

Building docs

(This project’s readme.md is generated by verb, please don’t edit the readme directly. Any changes to the readme must be made in the .verb.md readme template.)

To generate the readme, run the following command:

You might also be interested in these projects:

  • braces: Bash-like brace expansion, implemented in JavaScript. Safer than other brace expansion libs, with complete support… more | homepage
  • expand-brackets: Expand POSIX bracket expressions (character classes) in glob patterns. | homepage
  • extglob: Extended glob support for JavaScript. Adds (almost) the expressive power of regular expressions to glob… more | homepage
  • fill-range: Fill in a range of numbers or letters, optionally passing an increment or step to… more | homepage
  • nanomatch: Fast, minimal glob matcher for node.js. Similar to micromatch, minimatch and multimatch, but complete Bash… more | homepage
Commits Contributor
475 jonschlinkert
12 es128
8 doowb
3 paulmillr
2 TrySound
2 MartinKolarik
2 Tvrqvoise
2 tunnckoCore
1 amilajack
1 mrmlnc
1 devongovett
1 DianeLooney
1 UltCombo
1 tomByrer
1 fidian
1 simlu
1 wtgtybhertgeghgtwtg

Author

Jon Schlinkert


This file was generated by verb-generate-readme, v0.8.0, on April 10, 2019.


micromatch

NPM version NPM monthly downloads NPM total downloads Linux Build Status Windows Build Status

Glob matching for javascript/node.js. A drop-in replacement and faster alternative to minimatch and multimatch.

Please consider following this project’s author, Jon Schlinkert, and consider starring the project to show your :heart: and support.

Table of Contents

Details

Install

Install with npm:

Quickstart

The main export takes a list of strings and one or more glob patterns:

Use .isMatch() to get true/false:

Switching from minimatch and multimatch is easy!

Why use micromatch?

micromatch is a drop-in replacement for minimatch and multimatch

  • Micromatch uses snapdragon for parsing and compiling globs, which provides granular control over the entire conversion process in a way that is easy to understand, reason about, and maintain.
  • More consistently accurate matching than minimatch, with more than 36,000 test assertions to prove it.
  • More complete support for the Bash 4.3 specification than minimatch and multimatch. In fact, micromatch passes all of the spec tests from bash, including some that bash still fails.
  • Faster matching, from a combination of optimized glob patterns, faster algorithms, and regex caching.
  • More reliable windows support than minimatch and multimatch.

Matching features

  • Wildcards (**, *.js)
  • Negation ('!a/*.js', '*!(b).js'])
  • extglobs (+(x|y), !(a|b))
  • POSIX character classes ([[:alpha:][:digit:]])
  • brace expansion (foo/{1..5}.md, bar/{a,b,c}.js)
  • regex character classes (foo-[1-5].js)
  • regex logical “or” (foo/(abc|xyz).js)

You can mix and match these features to create whatever patterns you need!

Switching to micromatch

There is one notable difference between micromatch and minimatch in regards to how backslashes are handled. See the notes about backslashes for more information.

From minimatch

Use mm.isMatch() instead of minimatch():

Use mm.match() instead of minimatch.match():

From multimatch

Same signature:

API

micromatch

The main function takes a list of strings and one or more glob patterns to use for matching.

Params

  • list {Array}: A list of strings to match
  • patterns {String|Array}: One or more glob patterns to use for matching.
  • options {Object}: See available options for changing how matches are performed
  • returns {Array}: Returns an array of matches

Example

.match

Similar to the main function, but pattern must be a string.

Params

  • list {Array}: Array of strings to match
  • pattern {String}: Glob pattern to use for matching.
  • options {Object}: See available options for changing how matches are performed
  • returns {Array}: Returns an array of matches

Example

.isMatch

Returns true if the specified string matches the given glob pattern.

Params

  • string {String}: String to match
  • pattern {String}: Glob pattern to use for matching.
  • options {Object}: See available options for changing how matches are performed
  • returns {Boolean}: Returns true if the string matches the glob pattern.

Example

.some

Returns true if some of the strings in the given list match any of the given glob patterns.

Params

  • list {String|Array}: The string or array of strings to test. Returns as soon as the first match is found.
  • patterns {String|Array}: One or more glob patterns to use for matching.
  • options {Object}: See available options for changing how matches are performed
  • returns {Boolean}: Returns true if any patterns match str

Example

.every

Returns true if every string in the given list matches any of the given glob patterns.

Params

  • list {String|Array}: The string or array of strings to test.
  • patterns {String|Array}: One or more glob patterns to use for matching.
  • options {Object}: See available options for changing how matches are performed
  • returns {Boolean}: Returns true if any patterns match str

Example

.any

Returns true if any of the given glob patterns match the specified string.

Params

  • str {String|Array}: The string to test.
  • patterns {String|Array}: One or more glob patterns to use for matching.
  • options {Object}: See available options for changing how matches are performed
  • returns {Boolean}: Returns true if any patterns match str

Example

.all

Returns true if all of the given patterns match the specified string.

Params

  • str {String|Array}: The string to test.
  • patterns {String|Array}: One or more glob patterns to use for matching.
  • options {Object}: See available options for changing how matches are performed
  • returns {Boolean}: Returns true if any patterns match str

Example

.not

Returns a list of strings that do not match any of the given patterns.

Params

  • list {Array}: Array of strings to match.
  • patterns {String|Array}: One or more glob pattern to use for matching.
  • options {Object}: See available options for changing how matches are performed
  • returns {Array}: Returns an array of strings that do not match the given patterns.

Example

.contains

Returns true if the given string contains the given pattern. Similar to .isMatch but the pattern can match any part of the string.

Params

  • str {String}: The string to match.
  • patterns {String|Array}: Glob pattern to use for matching.
  • options {Object}: See available options for changing how matches are performed
  • returns {Boolean}: Returns true if the pattern matches any part of str.

Example

.matchKeys

Filter the keys of the given object with the given glob pattern and options. Does not attempt to match nested keys. If you need this feature, use glob-object instead.

Params

  • object {Object}: The object with keys to filter.
  • patterns {String|Array}: One or more glob patterns to use for matching.
  • options {Object}: See available options for changing how matches are performed
  • returns {Object}: Returns an object with only keys that match the given patterns.

Example

.matcher

Returns a memoized matcher function from the given glob pattern and options. The returned function takes a string to match as its only argument and returns true if the string is a match.

Params

  • pattern {String}: Glob pattern
  • options {Object}: See available options for changing how matches are performed.
  • returns {Function}: Returns a matcher function.

Example

.capture

Returns an array of matches captured by the glob pattern in the string, or null if the pattern did not match.

Params

  • pattern {String}: Glob pattern to use for matching.
  • string {String}: String to match
  • options {Object}: See available options for changing how matches are performed
  • returns {Boolean}: Returns an array of captures if the string matches the glob pattern, otherwise null.

Example
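
The capture behavior mirrors regex capture groups. A plain-regex sketch of what capturing 'test/*.js' against 'test/foo.js' amounts to (not micromatch's exact generated regex):

```javascript
// A star in a capturing glob corresponds to a regex group like ([^/]*):
const m = 'test/foo.js'.match(/^test\/([^/]*)\.js$/);
console.log(m ? m.slice(1) : null); // ['foo']
```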

.makeRe

Create a regular expression from the given glob pattern.

Params

  • pattern {String}: A glob pattern to convert to regex.
  • options {Object}: See available options for changing how matches are performed.
  • returns {RegExp}: Returns a regex created from the given pattern.

Example
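
To give a feel for the output, the regex below is roughly what a pattern like '*.js' compiles to; the exact regex micromatch generates varies by version:

```javascript
// Approximation of the regex produced for '*.js' (simplified):
const re = /^(?!\.)[^/]*\.js$/;
console.log(re.test('a.js'));  // true
console.log(re.test('a.md'));  // false
console.log(re.test('.a.js')); // false — dotfiles are excluded by default
```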

.braces

Expand the given brace pattern.

Params

  • pattern {String}: String with brace pattern to expand.
  • options {Object}: Any options to change how expansion is performed. See the braces library for all available options.
  • returns {Array}

Example

.create

Parses the given glob pattern and returns an array of abstract syntax trees (ASTs), with the compiled output and optional source map on each AST.

Params

  • pattern {String}: Glob pattern to parse and compile.
  • options {Object}: Any options to change how parsing and compiling is performed.
  • returns {Object}: Returns an object with the parsed AST, compiled string and optional source map.

Example

.parse

Parse the given str with the given options.

Params

  • str {String}
  • options {Object}
  • returns {Object}: Returns an AST

Example

.compile

Compile the given ast or string with the given options.

Params

  • ast {Object|String}
  • options {Object}
  • returns {Object}: Returns an object that has an output property with the compiled string.

Example

.clearCache

Clear the regex cache.

Example

Options

options.basename

Allow glob patterns without slashes to match a file path based on its basename. Same behavior as minimatch option matchBase.

Type: Boolean

Default: false

Example

options.bash

Enabled by default, this option enforces bash-like behavior with stars immediately following a bracket expression. Bash bracket expressions are similar to regex character classes, but unlike regex, a star following a bracket expression does not repeat the bracketed characters. Instead, the star is treated the same as any other star.

Type: Boolean

Default: true

Example

options.cache

Disable regex and function memoization.

Type: Boolean

Default: undefined

options.dot

Match dotfiles. Same behavior as minimatch option dot.

Type: Boolean

Default: false

options.failglob

Similar to the --failglob behavior in Bash, throws an error when no matches are found.

Type: Boolean

Default: undefined

options.ignore

String or array of glob patterns to match files to ignore.

Type: String|Array

Default: undefined

options.matchBase

Alias for options.basename.

options.nobrace

Disable expansion of brace patterns. Same behavior as minimatch option nobrace.

Type: Boolean

Default: undefined

See braces for more information about extended brace expansion.

options.nocase

Use a case-insensitive regex for matching files. Same behavior as minimatch.

Type: Boolean

Default: undefined
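
Conceptually, nocase corresponds to the regex i flag:

```javascript
// Case-sensitive vs. case-insensitive matching, shown with plain RegExp:
const caseSensitive = /^[a-z]\.js$/;
const nocase = /^[a-z]\.js$/i;
console.log(caseSensitive.test('E.js')); // false
console.log(nocase.test('E.js'));        // true
```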

options.nodupes

Remove duplicate elements from the result array.

Type: Boolean

Default: undefined

Example

Example of using the unescape and nodupes options together:

options.noext

Disable extglob support, so that extglobs are regarded as literal characters.

Type: Boolean

Default: undefined

Examples

options.nonegate

Disallow negation (!) patterns, and treat leading ! as a literal character to match.

Type: Boolean

Default: undefined

options.noglobstar

Disable matching with globstars (**).

Type: Boolean

Default: undefined

options.nonull

Alias for options.nullglob.

options.nullglob

If true, when no matches are found the actual (arrayified) glob pattern is returned instead of an empty array. Same behavior as minimatch option nonull.

Type: Boolean

Default: undefined

options.snapdragon

Pass your own instance of snapdragon, to customize parsers or compilers.

Type: Object

Default: undefined

options.sourcemap

Generate a source map by enabling the sourcemap option with the .parse, .compile, or .create methods.

(Note that sourcemaps are currently not enabled for brace patterns)

Examples

options.unescape

Remove backslashes from returned matches.

Type: Boolean

Default: undefined

Example

In this example we want to match a literal *:

options.unixify

Convert path separators on returned files to posix/unix-style forward slashes.

Type: Boolean

Default: true on windows, false everywhere else

Example

Extended globbing

Micromatch also supports extended globbing features.

extglobs

Extended globbing, as described by the bash man page:

pattern regex equivalent description
?(pattern) (pattern)? Matches zero or one occurrence of the given patterns
*(pattern) (pattern)* Matches zero or more occurrences of the given patterns
+(pattern) (pattern)+ Matches one or more occurrences of the given patterns
@(pattern) (pattern) * Matches one of the given patterns
!(pattern) N/A (equivalent regex is much more complicated) Matches anything except one of the given patterns

* Note that @ isn’t a regex character.

Powered by extglob. Visit that library for the full range of options or to report extglob related issues.

braces

Brace patterns can be used to match specific ranges or sets of characters. For example, the pattern */{1..3}/* would match any of following strings:

foo/1/bar
foo/2/bar
foo/3/bar
baz/1/qux
baz/2/qux
baz/3/qux

Visit braces to see the full range of features and options related to brace expansion, or to create brace matching or expansion related issues.

regex character classes

Given the list: ['a.js', 'b.js', 'c.js', 'd.js', 'E.js']:

  • [ac].js: matches both a and c, returning ['a.js', 'c.js']
  • [b-d].js: matches from b to d, returning ['b.js', 'c.js', 'd.js']
  • a/[A-Z].js: matches an uppercase letter, returning ['a/E.js']

Learn about regex character classes.

regex groups

Given ['a.js', 'b.js', 'c.js', 'd.js', 'E.js']:

  • (a|c).js: would match either a or c, returning ['a.js', 'c.js']
  • (b|d).js: would match either b or d, returning ['b.js', 'd.js']
  • (b|[A-Z]).js: would match either b or an uppercase letter, returning ['b.js', 'E.js']

As with regex, parens can be nested, so patterns like ((a|b)|c)/b will work, although brace expansion might be friendlier to use, depending on preference.

POSIX bracket expressions

POSIX brackets are intended to be more user-friendly than regex character classes. This of course is in the eye of the beholder.

Example

See expand-brackets for more information about bracket expressions.


Notes

Bash 4.3 parity

Whenever possible, matching behavior is based on the behavior of Bash 4.3, which is mostly consistent with minimatch.

However, it’s surprising how many edge cases and rabbit holes there are with glob matching. Since there is no real glob specification, and micromatch is more accurate than both Bash and minimatch, there are cases where best-guesses were made for behavior. In a few cases where Bash had no answers, we used wildmatch (used by git) as a fallback.

Backslashes

There is an important, notable difference between minimatch and micromatch in regards to how backslashes are handled in glob patterns.

  • Micromatch exclusively and explicitly reserves backslashes for escaping characters in a glob pattern, even on windows. This is consistent with bash behavior.
  • Minimatch converts all backslashes to forward slashes, which means you can’t use backslashes to escape any characters in your glob patterns.

We made this decision for micromatch for a couple of reasons:

  • consistency with bash conventions.
  • glob patterns are not filepaths. They are a type of regular language that is converted to a JavaScript regular expression. Thus, when forward slashes are defined in a glob pattern, the resulting regular expression will match windows or POSIX path separators just fine.

A note about joining paths to globs

Note that when you pass something like path.join('foo', '*') to micromatch, you are creating a filepath and expecting it to still work as a glob pattern. This causes problems on windows, since the path.sep is \\.

In other words, since \\ is reserved as an escape character in globs, on windows path.join('foo', '*') would result in foo\\*, which tells micromatch to match * as a literal character. This is the same behavior as bash.

Contributing

All contributions are welcome! Please read the contributing guide to get started.

Bug reports

Please create an issue if you encounter a bug or matching behavior that doesn’t seem correct. If you find a matching-related issue, please:

  • research existing issues first (open and closed)
  • visit the minimatch documentation to cross-check expected behavior in node.js
  • if all else fails, since there is no real specification for globs, we will probably need to discuss expected behavior and decide how to resolve it, which means any detail you can provide to help with this discussion would be greatly appreciated.

Platform issues

It’s important to us that micromatch work consistently on all platforms. If you encounter any platform-specific matching or path related issues, please let us know (pull requests are also greatly appreciated).

Benchmarks

Running benchmarks

Install dev dependencies:

Latest results

As of February 18, 2018 (longer bars are better):

# braces-globstar-large-list (485691 bytes)
  micromatch ██████████████████████████████████████████████████ (517 ops/sec ±0.49%)
  minimatch  █ (18.92 ops/sec ±0.54%)
  multimatch █ (18.94 ops/sec ±0.62%)

  micromatch is faster by an avg. of 2,733%

# braces-multiple (3362 bytes)
  micromatch ██████████████████████████████████████████████████ (33,625 ops/sec ±0.45%)
  minimatch   (2.92 ops/sec ±3.26%)
  multimatch  (2.90 ops/sec ±2.76%)

  micromatch is faster by an avg. of 1,156,935%

# braces-range (727 bytes)
  micromatch █████████████████████████████████████████████████ (155,220 ops/sec ±0.56%)
  minimatch  ██████ (20,186 ops/sec ±1.27%)
  multimatch ██████ (19,809 ops/sec ±0.60%)

  micromatch is faster by an avg. of 776%

# braces-set (2858 bytes)
  micromatch █████████████████████████████████████████████████ (24,354 ops/sec ±0.92%)
  minimatch  █████ (2,566 ops/sec ±0.56%)
  multimatch ████ (2,431 ops/sec ±1.25%)

  micromatch is faster by an avg. of 975%

# globstar-large-list (485686 bytes)
  micromatch █████████████████████████████████████████████████ (504 ops/sec ±0.45%)
  minimatch  ███ (33.36 ops/sec ±1.08%)
  multimatch ███ (33.19 ops/sec ±1.35%)

  micromatch is faster by an avg. of 1,514%

# globstar-long-list (90647 bytes)
  micromatch ██████████████████████████████████████████████████ (2,694 ops/sec ±1.08%)
  minimatch  ████████████████ (870 ops/sec ±1.09%)
  multimatch ████████████████ (862 ops/sec ±0.84%)

  micromatch is faster by an avg. of 311%

# globstar-short-list (182 bytes)
  micromatch ██████████████████████████████████████████████████ (328,921 ops/sec ±1.06%)
  minimatch  █████████ (64,808 ops/sec ±1.42%)
  multimatch ████████ (57,991 ops/sec ±2.11%)

  micromatch is faster by an avg. of 536%

# no-glob (701 bytes)
  micromatch █████████████████████████████████████████████████ (415,935 ops/sec ±0.36%)
  minimatch  ███████████ (92,730 ops/sec ±1.44%)
  multimatch █████████ (81,958 ops/sec ±2.13%)

  micromatch is faster by an avg. of 476%

# star-basename-long (12339 bytes)
  micromatch █████████████████████████████████████████████████ (7,963 ops/sec ±0.36%)
  minimatch  ███████████████████████████████ (5,072 ops/sec ±0.83%)
  multimatch ███████████████████████████████ (5,028 ops/sec ±0.40%)

  micromatch is faster by an avg. of 158%

# star-basename-short (349 bytes)
  micromatch ██████████████████████████████████████████████████ (269,552 ops/sec ±0.70%)
  minimatch  ██████████████████████ (122,457 ops/sec ±1.39%)
  multimatch ████████████████████ (110,788 ops/sec ±1.99%)

  micromatch is faster by an avg. of 231%

# star-folder-long (19207 bytes)
  micromatch █████████████████████████████████████████████████ (3,806 ops/sec ±0.38%)
  minimatch  ████████████████████████████ (2,204 ops/sec ±0.32%)
  multimatch ██████████████████████████ (2,020 ops/sec ±1.07%)

  micromatch is faster by an avg. of 180%

# star-folder-short (551 bytes)
  micromatch ██████████████████████████████████████████████████ (249,077 ops/sec ±0.40%)
  minimatch  ███████████ (59,431 ops/sec ±1.67%)
  multimatch ███████████ (55,569 ops/sec ±1.43%)

  micromatch is faster by an avg. of 433%

About

Contributing

Pull requests and stars are always welcome. For bugs and feature requests, please create an issue.

Please read the contributing guide for advice on opening issues, pull requests, and coding standards.

Running Tests

Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command:
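The command itself did not survive extraction; for verb-generated readmes such as this one, the usual invocation is (an assumption based on the project's conventions, not confirmed by this document):

```shell
npm install && npm test
```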

Building docs

(This project’s readme.md is generated by verb, please don’t edit the readme directly. Any changes to the readme must be made in the .verb.md readme template.)

To generate the readme, run the following command:

You might also be interested in these projects:

  • braces: Bash-like brace expansion, implemented in JavaScript. Safer than other brace expansion libs, with complete support… more | homepage
  • expand-brackets: Expand POSIX bracket expressions (character classes) in glob patterns. | homepage
  • extglob: Extended glob support for JavaScript. Adds (almost) the expressive power of regular expressions to glob… more | homepage
  • fill-range: Fill in a range of numbers or letters, optionally passing an increment or step to… more | homepage
  • nanomatch: Fast, minimal glob matcher for node.js. Similar to micromatch, minimatch and multimatch, but complete Bash… more | homepage
Commits Contributor
457 jonschlinkert
12 es128
8 doowb
3 paulmillr
2 TrySound
2 MartinKolarik
2 charlike-old
1 amilajack
1 mrmlnc
1 devongovett
1 DianeLooney
1 UltCombo
1 tomByrer
1 fidian

Author

Jon Schlinkert


This file was generated by verb-generate-readme, v0.6.0, on February 18, 2018.


regexp-tree

Build Status npm version npm downloads

Regular expressions processor in JavaScript

TL;DR: RegExp Tree is a regular expressions processor, which includes parser, traversal, transformer, optimizer, and interpreter APIs.

You can get an overview of the tool in this article.

Table of Contents

Installation

The parser can be installed as an npm module:

npm install -g regexp-tree

You can also try it online using AST Explorer.

Development

  1. Fork https://github.com/DmitrySoshnikov/regexp-tree repo
  2. If there is an actual issue from the issues list you’d like to work on, feel free to assign it yourself, or comment on it to avoid collisions (open a new issue if needed)
  3. Make your changes
  4. Make sure npm test still passes (add new tests if needed)
  5. Submit a PR

The regexp-tree parser is implemented as an automatic LR parser using Syntax tool. The parser module is generated from the regexp grammar, which is based on the regular expressions grammar used in ECMAScript.

For development from the github repository, run build command to generate the parser module, and transpile JS code:

git clone https://github.com/<your-github-account>/regexp-tree.git
cd regexp-tree
npm install
npm run build

NOTE: JS code transpilation is used to support older versions of Node. For faster development cycle you can use npm run watch command, which continuously transpiles JS code.

Usage as a CLI

Note: the CLI is exposed as its own regexp-tree-cli module.

Check the options available from CLI:

regexp-tree-cli --help
Usage: regexp-tree-cli [options]

Options:
   -e, --expression   A regular expression to be parsed
   -l, --loc          Whether to capture AST node locations
   -o, --optimize     Applies optimizer on the passed expression
   -c, --compat       Applies compat-transpiler on the passed expression
   -t, --table        Print NFA/DFA transition tables (nfa/dfa/all)

To parse a regular expression, pass -e option:

regexp-tree-cli -e '/a|b/i'

Which produces an AST node corresponding to this regular expression:

NOTE: the format of a regexp is / Body / OptionalFlags.

Usage from Node

The parser can also be used as a Node module:

Note, regexp-tree supports parsing regexes from strings, and also from actual RegExp objects (in general – from any object which can be coerced to a string). If some feature is not implemented yet in an actual JavaScript RegExp, it should be passed as a string:

Also note that in string mode, escaping is done using two backslashes \\, as in any JavaScript string:
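A minimal sketch of the Node usage, based on the module's documented parse API (the regexp being parsed is illustrative):

```javascript
const regexpTree = require('regexp-tree');

// Parse from a string; note the double backslash in string mode:
const ast = regexpTree.parse('/\\d+/i');

// Parse from an actual RegExp object:
const ast2 = regexpTree.parse(/a|b/i);

console.log(ast.type);  // 'RegExp'
console.log(ast.flags); // 'i'
```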

Capturing locations

For source code transformation tools it might be useful also to capture locations of the AST nodes. From the command line it’s controlled via the -l option:

regexp-tree-cli -e '/ab/' -l

This attaches loc object to each AST node:

From Node it’s controlled via setOptions method exposed on the parser:

The setOptions method sets global options, which are preserved between calls. It is also possible to provide options per a single parse call, which might be more preferred:
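A short sketch of the per-call form, assuming the documented captureLocations option:

```javascript
const regexpTree = require('regexp-tree');

// Options provided per a single parse call (preferred over
// global options, which are preserved between calls):
const ast = regexpTree.parse('/ab/', {
  captureLocations: true,
});

// Each node now carries a `loc` object:
console.log(ast.body.loc);
```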

Using traversal API

The traverse module allows handling needed AST nodes using the visitor pattern. In Node the module is exposed as the regexpTree.traverse method. Handlers receive an instance of the NodePath class, which encapsulates node itself, its parent node, property, and index (in case the node is part of a collection).

Visiting a node follows this algorithm:

  • call the pre handler
  • recurse into the node’s children
  • call the post handler

For each node type of interest, you can provide either:

  • a function (pre)
  • an object with members pre and post

You can also provide a * handler which will be executed on every node.

Example:
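A sketch of a traversal, assuming the handler shapes described above (the regexp and log messages are illustrative):

```javascript
const regexpTree = require('regexp-tree');

const ast = regexpTree.parse('/[a-z]+/');

regexpTree.traverse(ast, {
  // Executed on every node; `pre` runs before the children
  // are visited, `post` after:
  '*': {
    pre(path) {
      console.log('enter:', path.node.type);
    },
    post(path) {
      console.log('exit:', path.node.type);
    },
  },

  // A plain function acts as the `pre` handler:
  Quantifier(path) {
    console.log('quantifier kind:', path.node.kind);
  },
});
```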

Using transform API

NOTE: you can play with transformation APIs, and write actual transforms for quick tests in AST Explorer. See this example.

While traverse module provides basic traversal API, which can be used for any purposes of AST handling, transform module focuses mainly on transformation of regular expressions.

It accepts regular expressions in different formats (a string, an actual RegExp object, or an AST), applies a set of transformations, and returns an instance of TransformResult. Handlers receive as a parameter the same NodePath object used in traverse.

Example:
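A sketch of a transform, rewriting the {1,} quantifier to its idiomatic + form (this mirrors the example in regexp-tree's documentation):

```javascript
const regexpTree = require('regexp-tree');

const re = regexpTree.transform('/x{1,}/', {
  Quantifier(path) {
    const {node} = path;
    // Replace the {1,} range quantifier with `+`:
    if (node.kind === 'Range' && node.from === 1 && !node.to) {
      node.kind = '+';
    }
  },
});

console.log(re.toString()); // '/x+/'
```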

Transform plugins

A transformation plugin is a module which exports a transformation handler. We have seen above how we can pass a handler object directly to the regexpTree.transform method, here we extract it into a separate module, so it can be implemented and shared independently:

Example of a plugin:

Once we have this plugin ready, we can require it, and pass to the transform function:
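A sketch of such a plugin and its usage; the file name and the transformation itself are hypothetical, the NodePath update API is per regexp-tree's documentation:

```javascript
// file: char-a-to-b-transform.js (hypothetical plugin module)
module.exports = {
  // Handler executed on every Char node:
  Char(path) {
    const {node} = path;
    if (node.value === 'a') {
      path.update({value: 'b', symbol: 'b', codePoint: 'b'.codePointAt(0)});
    }
  },
};

// Usage:
const regexpTree = require('regexp-tree');
const plugin = require('./char-a-to-b-transform');

const result = regexpTree.transform('/a+/', plugin);
console.log(result.toString()); // expected: '/b+/'
```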

NOTE: we can also pass a list of plugins to the regexpTree.transform. In this case the plugins are applied in one pass in order. Another approach is to run several sequential calls to transform, setting up a pipeline, when a transformed AST is passed further to another plugin, etc.

You can see other examples of transform plugins in the optimizer/transforms or in the compat-transpiler/transforms directories.

Using generator API

The generator module generates regular expressions from corresponding AST nodes. In Node the module is exposed as regexpTree.generate method.

Example:
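A minimal sketch of generation, assuming the documented generate API:

```javascript
const regexpTree = require('regexp-tree');

const ast = regexpTree.parse('/a|b/i');

// Generate the regexp string back from the AST:
console.log(regexpTree.generate(ast)); // '/a|b/i'
```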

Using optimizer API

Optimizer transforms your regexp into an optimized version, replacing some sub-expressions with their idiomatic patterns. This might be good for different kinds of minifiers, as well as for regexp machines.

NOTE: the Optimizer is implemented as a set of regexp-tree plugins.

Example:
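A sketch of the Node API, using the same regexp as the CLI example below:

```javascript
const regexpTree = require('regexp-tree');

const originalRe = '/[a-zA-Z_0-9][A-Z_\\da-z]*\\e{1,}/';

// `optimize` returns a TransformResult:
const optimizedRe = regexpTree.optimize(originalRe).toString();

console.log(optimizedRe); // '/\w+e+/'
```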

From CLI the optimizer is available via --optimize (-o) option:

regexp-tree-cli -e '/[a-zA-Z_0-9][A-Z_\da-z]*\e{1,}/' -o

Result:

Optimized: /\w+e+/

See the optimizer README for more details.

Optimizer ESLint plugin

The optimizer module is also available as an ESLint plugin, which can be installed at: eslint-plugin-optimize-regex.

Using compat-transpiler API

The compat-transpiler module translates a regexp that uses new syntax or new features into an equivalent regexp in a legacy representation, so it can be used in engines which don’t yet implement the new syntax.

NOTE: the compat-transpiler is implemented as a set of regexp-tree plugins.

Example, “dotAll” s flag:

Is translated into:

Or named capturing groups:

Becomes:

To use the API from Node:
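A sketch combining both features above, assuming the documented compatTranspile API and its transform whitelist; the result matches the CLI output shown below:

```javascript
const regexpTree = require('regexp-tree');

// Translate `s` (dotAll) flag and named capturing groups
// into a legacy-compatible representation:
const result = regexpTree.compatTranspile('/(?<all>.)\\k<all>/s', [
  'dotAll',
  'namedCapturingGroups',
]);

console.log(result.toString()); // '/([\0-\uFFFF])\1/'
```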

From CLI the compat-transpiler is available via --compat (-c) option:

regexp-tree-cli -e '/(?<all>.)\k<all>/s' -c

Result:

Compat: /([\0-\uFFFF])\1/

Compat-transpiler Babel plugin

The compat-transpiler module is also available as a Babel plugin, which can be installed at: babel-plugin-transform-modern-regexp.

Note, the plugin also includes extended regexp features.

RegExp extensions

Besides future proposals, like named capturing groups and others currently being standardized, regexp-tree also supports non-standard features.

NOTE: “non-standard” means non-standard with respect to the ECMAScript specification; in other regexp engines, e.g. PCRE, Python, etc., these features are standard.

One such feature is the x flag, which enables extended mode of regular expressions. In this mode most whitespace is ignored, and expressions can use #-comments.

Example:

/
  # A regular expression for date.

  (?<year>\d{4})-    # year part of a date
  (?<month>\d{2})-   # month part of a date
  (?<day>\d{2})      # day part of a date

/x

This is normally parsed by the regexp-tree parser, and compat-transpiler has full support for it; it’s translated into:

/(\d{4})-(\d{2})-(\d{2})/

RegExp extensions Babel plugin

The regexp extensions are also available as a Babel plugin, which can be installed at: babel-plugin-transform-modern-regexp.

Note, the plugin also includes compat-transpiler features.

Creating RegExp objects

To create an actual RegExp JavaScript object, we can use regexpTree.toRegExp method:
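A minimal sketch, assuming the documented toRegExp API; the compat-transpiler is applied as needed for features the host engine lacks:

```javascript
const regexpTree = require('regexp-tree');

// `s` (dotAll) may not be supported natively by the engine;
// toRegExp produces an equivalent, executable RegExp:
const re = regexpTree.toRegExp('/a.b/s');

console.log(re.test('a\nb')); // should be true: `.` matches `\n` under dotAll
```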

Executing regexes

It is also possible to execute regular expressions using exec API method, which has support for new syntax, and features, such as named capturing group, etc:
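A sketch of exec with a named capturing group, assuming the documented exec API exposes matches via a groups property:

```javascript
const regexpTree = require('regexp-tree');

const result = regexpTree.exec('/(?<word>\\w+)/', 'hello world');

console.log(result[0]);          // full match, e.g. 'hello'
console.log(result.groups.word); // named group capture
```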

Using interpreter API

NOTE: you can read more about implementation details of the interpreter in this series of articles.

In addition to executing regular expressions using JavaScript built-in RegExp engine, RegExp Tree also implements own interpreter based on classic NFA/DFA finite automaton engine.

Currently it is aimed at educational purposes: tracing the regexp matching process, and the transitions between NFA/DFA states. It also allows building a state transition table, which can be used for custom implementations. In the API the module is exposed as the fa (finite automaton) object.

Example:
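A minimal sketch, assuming the fa object exposes a test method for matching via the NFA/DFA engine:

```javascript
const {fa} = require('regexp-tree');

// Match a string using the finite-automaton engine rather
// than the built-in RegExp engine:
console.log(fa.test(/ab|c/, 'ab'));
```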

For more granular work with NFA and DFA, fa module also exposes convenient builders, so you can build NFA fragments directly:

Printing NFA/DFA tables

The --table option allows displaying NFA/DFA transition tables. RegExp Tree also applies DFA minimization (using the N-equivalence algorithm), and produces the minimal transition table as its final result.

In the example below for the /a|b|c/ regexp, we first obtain the NFA transition table, which is further converted to the original DFA transition table (down from the 10 non-deterministic states to 4 deterministic states), and eventually minimized to the final DFA table (from 4 to only 2 states).

./bin/regexp-tree-cli -e '/a|b|c/' --table all

Result:

> - starting
✓ - accepting

NFA transition table:

┌─────┬───┬───┬────┬─────────────┐
│     │ a │ b │ c  │ ε*          │
├─────┼───┼───┼────┼─────────────┤
│ 1 > │   │   │    │ {1,2,3,7,9} │
├─────┼───┼───┼────┼─────────────┤
│ 2   │   │   │    │ {2,3,7}     │
├─────┼───┼───┼────┼─────────────┤
│ 3   │ 4 │   │    │ 3           │
├─────┼───┼───┼────┼─────────────┤
│ 4   │   │   │    │ {4,5,6}     │
├─────┼───┼───┼────┼─────────────┤
│ 5   │   │   │    │ {5,6}       │
├─────┼───┼───┼────┼─────────────┤
│ 6 ✓ │   │   │    │ 6           │
├─────┼───┼───┼────┼─────────────┤
│ 7   │   │ 8 │    │ 7           │
├─────┼───┼───┼────┼─────────────┤
│ 8   │   │   │    │ {8,5,6}     │
├─────┼───┼───┼────┼─────────────┤
│ 9   │   │   │ 10 │ 9           │
├─────┼───┼───┼────┼─────────────┤
│ 10  │   │   │    │ {10,6}      │
└─────┴───┴───┴────┴─────────────┘


DFA: Original transition table:

┌─────┬───┬───┬───┐
│     │ a │ b │ c │
├─────┼───┼───┼───┤
│ 1 > │ 4 │ 3 │ 2 │
├─────┼───┼───┼───┤
│ 2 ✓ │   │   │   │
├─────┼───┼───┼───┤
│ 3 ✓ │   │   │   │
├─────┼───┼───┼───┤
│ 4 ✓ │   │   │   │
└─────┴───┴───┴───┘


DFA: Minimized transition table:

┌─────┬───┬───┬───┐
│     │ a │ b │ c │
├─────┼───┼───┼───┤
│ 1 > │ 2 │ 2 │ 2 │
├─────┼───┼───┼───┤
│ 2 ✓ │   │   │   │
└─────┴───┴───┴───┘

AST nodes specification

Below are the AST node types for different regular expressions patterns:

Char

A basic building block: a single character. It can be escaped, and can be of different kinds.

Simple char

Basic non-escaped char in a regexp:

z

Node:
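The generated node output did not survive extraction; per regexp-tree's documentation, the node for a simple char has approximately this shape:

```javascript
// Approximate AST node for the char `z` (a sketch of the
// documented shape; not the tool's verbatim output):
const node = {
  type: 'Char',
  value: 'z',
  symbol: 'z',
  kind: 'simple',
  codePoint: 122, // 'z'.codePointAt(0)
};
```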

NOTE: to test this from CLI, the char should be in an actual regexp – /z/.

Escaped char
\z

The same value, escaped flag is added:

Escaping is mostly used with meta symbols:

// Syntax error
*
\*

OK, node:

Meta char

A meta character should not be confused with an escaped char.

Example:

\n

Node:

Other meta characters include: ., \f, \r, \n, \t, \v, \0, [\b] (backspace char), \s, \S, \w, \W, \d, \D.

NOTE: Meta characters representing ranges (like ., \s, etc.) have undefined value for symbol and NaN for codePoint.

NOTE: \b and \B are parsed as Assertion node type, not Char.

Control char

A char preceded with \c, e.g. \cx, which stands for CTRL+x:

\cx

Node:

HEX char-code

A char preceded with \x, followed by a HEX-code, e.g. \x3B (symbol ;):

\x3B

Node:

Decimal char-code

Char-code:

\42

Node:

Octal char-code

A char-code starting with \0, followed by an octal number:

\073

Node:

Unicode

A unicode char starting with \u, followed by a hex number:

\u003B

Node:

When using the u flag, unicode chars can also be represented using \u followed by a hex number between curly braces:

\u{1F680}

Node:

When using the u flag, unicode chars can also be represented using a surrogate pair:

\ud83d\ude80

Node:

Character class

Character classes define a set of characters. A set may include simple characters as well as character ranges. A class can be positive (any of the characters in the class match), or negative (any but the characters in the class match).

Positive character class

A positive character class is defined between [ and ] brackets:

[a*]

A node:

NOTE: some meta symbols are treated as normal characters in a character class. E.g. * is not a repetition quantifier, but a simple char.

Negative character class

A negative character class is defined between [^ and ] brackets:

[^ab]

An AST node is the same, just negative property is added:

Character class ranges

As mentioned, a character class may also contain ranges of symbols:

[a-z]

A node:

NOTE: it is a syntax error if the to value is less than the from value: /[z-a]/.

The range value can be the same for from and to, and the special range - character is treated as a simple character when it stands in a char position:

// from: 'a', to: 'a'
[a-a]

// from: '-', to: '-'
[---]

// simple '-' char:
[-]

// 3 ranges:
[a-zA-Z0-9]+

Unicode properties

Unicode property escapes are a new type of escape sequence available in regular expressions that have the u flag set. With this feature it is possible to write Unicode expressions as:

The AST node for this expression is:

All possible property names, values, and their aliases can be found at the specification.

For General_Category it is possible to use a shorthand:

Binary property names use a single value as well:

The capitalized P defines the negation of the expression:

Alternative

An alternative (or concatenation) defines a chain of patterns followed one after another:

abc

A node:

Other examples:

// 'a' with a quantifier, followed by 'b'
a?b

// A group followed by a class:
(ab)[a-z]

Disjunction

The disjunction defines “OR” operation for regexp patterns. It’s a binary operation, having left, and right nodes.

Matches a or b:

a|b

A node:

Groups

The groups play two roles: they define grouping precedence, and allow capturing needed sub-expressions in the case of a capturing group.

Capturing group

“Capturing” means the matched string can be referred to later by a user, including in the pattern itself – by using backreferences.

Char a, and b are grouped, followed by the c char:

(ab)c

A node:

As we can see, it also tracks the number of the group.

Another example:

// A grouped disjunction of a symbol, and a character class:
(5|[a-z])
Named capturing group

NOTE: Named capturing groups are not yet supported by JavaScript RegExp. It is an ECMAScript proposal which is at stage 3 at the moment.

A capturing group can be given a name using the (?<name>...) syntax, for any identifier name.

For example, a regular expression for a date:

For the group:

We have the following node (the name property with value foo is added):

Note: The nameRaw property represents the name as parsed from the original source, including escape sequences. The name property represents the canonical decoded form of the name.

For example, given the /u flag and the following group:

(?<\u{03C0}>x)

We would have the following node:

Non-capturing group

Sometimes we don’t need to actually capture the matched string from a group. In this case we can use a non-capturing group:

Char a, and b are grouped, but not captured, followed by the c char:

(?:ab)c

The same node, the capturing flag is false:

Backreferences

A capturing group can be referenced in the pattern using notation of an escaped group number.

Matches abab string:

(ab)\1

A node:

A named capturing group can be accessed using \k<name> pattern, and also using a numbered reference.

Matches www:

A node:

Note: The referenceRaw property represents the reference as parsed from the original source, including escape sequences. The reference property represents the canonical decoded form of the reference.

For example, given the /u flag and the following pattern (matches www):

(?<π>w)\k<\u{03C0}>\1

We would have the following node:

Quantifiers

Quantifiers specify repetition of a regular expression (or of its part). Below are the quantifiers which wrap a parsed expression into a Repetition node. The quantifier itself can be of different kinds, and has Quantifier node type.

? zero-or-one

The ? quantifier is short for {0,1}.

a?

Node:

* zero-or-more

The * quantifier is short for {0,}.

a*

Node:

+ one-or-more

The + quantifier is short for {1,}.

// Same as `aa*`, or `a{1,}`
a+

Node:

Range-based quantifiers

Explicit range-based quantifiers are parsed as follows:

Exact number of matches
a{3}

The type of the quantifier is Range, and from, and to properties have the same value:

Open range

An open range doesn’t have a max value (with the semantics of “or more”, i.e. an Infinity value):

a{3,}

An AST node for such a range doesn’t contain a to property:

Closed range

A closed range has an explicit max value (which syntactically can be the same as the min value):

a{3,5}

// Same as a{3}
a{3,3}

An AST node for a closed range:

NOTE: it is a syntax error if the max value is less than min value: /a{3,2}/

Non-greedy

If any quantifier is followed by the ?, the quantifier becomes non-greedy.

Example:

a+?

Node:

Other examples:

a??
a*?
a{1}?
a{1,}?
a{1,3}?

Assertions

Assertions appear as separate AST nodes; however, instead of matching characters themselves, they assert certain conditions on the matching string. Examples: ^ – beginning of a string (or a line in multiline mode), $ – end of a string, etc.

^ begin marker

The ^ assertion checks whether a scanner is at the beginning of a string (or a line in multiline mode).

In the example below ^ is not a property of the a symbol, but a separate AST node for the assertion. The parsed node is actually an Alternative with two nodes:

^a

The node:

Since an assertion is a separate node, it may appear anywhere in the matching string. The following regexp is completely valid, and asserts the beginning of the string; it’ll match an empty string:

^^^^^
$ end marker

The $ assertion is similar to ^, but asserts the end of a string (or a line in a multiline mode):

a$

A node:

And again, this is a completely valid regexp, and matches an empty string:

^^^^$$$$$

// valid too:
$^
Boundary assertions

The \b assertion checks for a word boundary, i.e. the position between a word and a space.

Matches x in x y, but not in xy:

x\b

A node:

The \B assertion, conversely, checks for a non-word boundary. The following example matches x in xy, but not in x y:

x\B

A node is the same:

Lookahead assertions

These assertions check whether a pattern is followed (or not followed for the negative assertion) by another pattern.

Positive lookahead assertion

Matches a only if it’s followed by b:

a(?=b)

A node:

Negative lookahead assertion

Matches a only if it’s not followed by b:

a(?!b)

A node is similar, just negative flag is added:

Lookbehind assertions

NOTE: Lookbehind assertions are not yet supported by JavaScript RegExp. It is an ECMAScript proposal which is at stage 3 at the moment.

These assertions check whether a pattern is preceded (or not preceded for the negative assertion) by another pattern.

Positive lookbehind assertion

Matches b only if it’s preceded by a:

(?<=a)b

A node:

Negative lookbehind assertion

Matches b only if it’s not preceded by a:

(?<!a)b

A node:



Deprecated!

As of Feb 11th 2020, request is fully deprecated. No new changes are expected to land. In fact, none have landed for some time.

For more information about why request is deprecated and possible alternatives refer to this issue.



Request - Simplified HTTP client

npm package

Build status Coverage Coverage Dependency Status Known Vulnerabilities Gitter

Super simple to use

Request is designed to be the simplest way possible to make http calls. It supports HTTPS and follows redirects by default.

Table of contents

Request also offers convenience methods like request.defaults and request.post, and there are lots of usage examples and several debugging techniques.


Streaming

You can stream any response to a file stream.
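A minimal sketch of response streaming, per request's documented stream interface (the URL and file name are illustrative):

```javascript
const fs = require('fs');
const request = require('request');

// Stream the response body straight to a file on disk:
request('http://google.com/doodle.png')
  .pipe(fs.createWriteStream('doodle.png'));
```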

You can also stream a file to a PUT or POST request. This method will also check the file extension against a mapping of file extensions to content-types (in this case application/json) and use the proper content-type in the PUT request (if the headers don’t already provide one).

Request can also pipe to itself. When doing so, content-type and content-length are preserved in the PUT headers.

Request emits a “response” event when a response is received. The response argument will be an instance of http.IncomingMessage.

To easily handle errors when streaming requests, listen to the error event before piping:
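A sketch of the recommended pattern (URL and file name are illustrative):

```javascript
const fs = require('fs');
const request = require('request');

request
  .get('http://mysite.com/doodle.png')
  // Attach the error listener before piping, so stream
  // errors don't crash the process unhandled:
  .on('error', (err) => {
    console.error(err);
  })
  .pipe(fs.createWriteStream('doodle.png'));
```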

Now let’s get fancy.

You can also pipe() from http.ServerRequest instances, as well as to http.ServerResponse instances. The HTTP method, headers, and entity-body data will be sent. Which means that, if you don’t really care about security, you can do:

And since pipe() returns the destination stream in ≥ Node 0.5.x you can do one line proxying. :)

Also, none of this new functionality conflicts with request’s previous features; it just expands them.

You can still use intermediate proxies, the requests will still follow HTTP forwards, etc.

back to top


Promises & Async/Await

request supports both streaming and callback interfaces natively. If you’d like request to return a Promise instead, you can use an alternative interface wrapper for request. These wrappers can be useful if you prefer to work with Promises, or if you’d like to use async/await in ES2017.

Several alternative interfaces are provided by the request team, including:

  • request-promise (uses Bluebird Promises)
  • request-promise-native (uses native Promises)
  • request-promise-any (uses any-promise Promises)

Also, util.promisify, which is available from Node.js v8.0, can be used to convert a regular function that takes a callback into one that returns a promise instead.
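A sketch of the promisify approach (the URL is illustrative; the promise resolves with the response object, and request attaches the body to response.body):

```javascript
const {promisify} = require('util');
const request = require('request');

const get = promisify(request.get);

(async () => {
  const response = await get('http://example.com');
  console.log(response.statusCode);
  console.log(response.body);
})();
```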

back to top


Forms

request supports application/x-www-form-urlencoded and multipart/form-data form uploads. For multipart/related refer to the multipart API.

application/x-www-form-urlencoded (URL-Encoded Forms)

URL-encoded forms are simple.
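A minimal sketch using the documented form option (the URL is illustrative):

```javascript
const request = require('request');

// Sends `key=value` as an application/x-www-form-urlencoded body:
request.post('http://service.com/upload', {
  form: {key: 'value'},
}, (err, httpResponse, body) => {
  if (err) return console.error(err);
  console.log('Server responded with:', body);
});
```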

multipart/form-data (Multipart Form Uploads)

For multipart/form-data we use the form-data library by @felixge. In most cases, you can pass your upload form data via the formData option.
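A sketch of the formData option, per request's documented multipart support (the URL and file name are illustrative):

```javascript
const fs = require('fs');
const request = require('request');

const formData = {
  // Simple key/value pairs:
  my_field: 'my_value',
  // Buffers are supported:
  my_buffer: Buffer.from([1, 2, 3]),
  // So are streams, e.g. a file upload:
  my_file: fs.createReadStream(__dirname + '/unicycle.jpg'),
};

request.post({url: 'http://service.com/upload', formData}, (err, httpResponse, body) => {
  if (err) return console.error('upload failed:', err);
  console.log('Upload successful! Server responded with:', body);
});
```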

For advanced cases, you can access the form-data object itself via r.form(). This can be modified until the request is fired on the next cycle of the event-loop. (Note that calling form() will clear the currently set form data for that request.)

See the form-data README for more information & examples.

multipart/related

Some variations in different HTTP implementations require a newline/CRLF before, after, or both before and after the boundary of a multipart/related request (using the multipart option). This has been observed in the .NET WebAPI version 4.0. You can turn on a boundary preambleCRLF or postamble by passing them as true to your request options.

back to top


HTTP Authentication

If passed as an option, auth should be a hash containing values:

  • user || username
  • pass || password
  • sendImmediately (optional)
  • bearer (optional)

The method form takes parameters auth(username, password, sendImmediately, bearer).

sendImmediately defaults to true, which causes a basic or bearer authentication header to be sent. If sendImmediately is false, then request will retry with a proper authentication header after receiving a 401 response from the server (which must contain a WWW-Authenticate header indicating the required authentication method).
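A sketch of the equivalent forms described above (the server URL and credentials are illustrative):

```javascript
const request = require('request');

// Option form:
request.get('http://some.server.com/', {
  auth: {
    user: 'username',
    pass: 'password',
    sendImmediately: false,
  },
});

// Equivalent method form:
request.get('http://some.server.com/').auth('username', 'password', false);

// Bearer authentication:
request.get('http://some.server.com/', {
  auth: {bearer: 'bearerToken'},
});
```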

Note that you can also specify basic authentication using the URL itself, as detailed in RFC 1738. Simply pass the user:password before the host with an @ sign:

Digest authentication is supported, but it only works with sendImmediately set to false; otherwise request will send basic authentication on the initial request, which will probably cause the request to fail.

Bearer authentication is supported, and is activated when the bearer value is available. The value may be either a String or a Function returning a String. Using a function to supply the bearer token is particularly useful if used in conjunction with defaults to allow a single function to supply the last known token at the time of sending a request, or to compute one on the fly.

back to top


Custom HTTP Headers

HTTP Headers, such as User-Agent, can be set in the options object. In the example below, we call the github API to find out the number of stars and forks for the request repository. This requires a custom User-Agent header as well as https.
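A sketch of the GitHub call described above, assuming the repository metadata fields stargazers_count and forks_count of the GitHub API:

```javascript
const request = require('request');

const options = {
  url: 'https://api.github.com/repos/request/request',
  headers: {
    'User-Agent': 'request', // GitHub's API requires a User-Agent header
  },
  json: true, // parse the response body as JSON
};

request(options, (error, response, body) => {
  if (!error && response.statusCode === 200) {
    console.log(body.stargazers_count + ' Stars');
    console.log(body.forks_count + ' Forks');
  }
});
```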

back to top


OAuth Signing

OAuth version 1.0 is supported. The default signing algorithm is HMAC-SHA1:

// OAuth1.0 - 3-legged server side flow (Twitter example)
// step 1
const qs = require('querystring')
  , oauth =
    { callback: 'http://mysite.com/callback/'
    , consumer_key: CONSUMER_KEY
    , consumer_secret: CONSUMER_SECRET
    }
  , url = 'https://api.twitter.com/oauth/request_token'
  ;
request.post({url:url, oauth:oauth}, function (e, r, body) {
  // Ideally, you would take the body in the response
  // and construct a URL that a user clicks on (like a sign in button).
  // The verifier is only available in the response after a user has
  // verified with twitter that they are authorizing your app.

  // step 2
  const req_data = qs.parse(body)
  const uri = 'https://api.twitter.com/oauth/authenticate'
    + '?' + qs.stringify({oauth_token: req_data.oauth_token})
  // redirect the user to the authorize uri

  // step 3
  // after the user is redirected back to your server
  const auth_data = qs.parse(body)
    , oauth =
      { consumer_key: CONSUMER_KEY
      , consumer_secret: CONSUMER_SECRET
      , token: auth_data.oauth_token
      , token_secret: req_data.oauth_token_secret
      , verifier: auth_data.oauth_verifier
      }
    , url = 'https://api.twitter.com/oauth/access_token'
    ;
  request.post({url:url, oauth:oauth}, function (e, r, body) {
    // ready to make signed requests on behalf of the user
    const perm_data = qs.parse(body)
      , oauth =
        { consumer_key: CONSUMER_KEY
        , consumer_secret: CONSUMER_SECRET
        , token: perm_data.oauth_token
        , token_secret: perm_data.oauth_token_secret
        }
      , url = 'https://api.twitter.com/1.1/users/show.json'
      , qs =
        { screen_name: perm_data.screen_name
        , user_id: perm_data.user_id
        }
      ;
    request.get({url:url, oauth:oauth, qs:qs, json:true}, function (e, r, user) {
      console.log(user)
    })
  })
})

For RSA-SHA1 signing, make the following changes to the OAuth options object:

  • Pass signature_method : 'RSA-SHA1'
  • Instead of consumer_secret, specify a private_key string in PEM format

For PLAINTEXT signing, make the following change to the OAuth options object:

  • Pass signature_method : 'PLAINTEXT'

To send OAuth parameters via query params or in a post body as described in The Consumer Request Parameters section of the oauth1 spec:

  • Pass transport_method : 'query' or transport_method : 'body' in the OAuth options object
  • transport_method defaults to 'header'

To use Request Body Hash you can either:

  • Manually generate the body hash and pass it as a string body_hash: '...'
  • Automatically generate the body hash by passing body_hash: true

back to top


Proxies

If you specify a proxy option, then the request (and any subsequent redirects) will be sent via a connection to the proxy server.
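A minimal sketch of the proxy option (both URLs are illustrative):

```javascript
const request = require('request');

request({
  url: 'http://www.google.com',
  proxy: 'http://localproxy.com', // hypothetical proxy address
}, (err, response, body) => {
  // The response was fetched via the proxy server.
});
```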

If your endpoint is an https url, and you are using a proxy, then request will send a CONNECT request to the proxy server first, and then use the supplied connection to connect to the endpoint.

That is, first it will make a request like:

HTTP/1.1 CONNECT endpoint-server.com:80
Host: proxy-server.com
User-Agent: whatever user agent you specify

and then the proxy server makes a TCP connection to endpoint-server on port 80, and returns a response that looks like:

HTTP/1.1 200 OK

At this point, the connection is left open, and the client is communicating directly with the endpoint-server.com machine.

See the wikipedia page on HTTP Tunneling for more information.

By default, when proxying http traffic, request will simply make a standard proxied http request. This is done by making the url section of the initial line of the request a fully qualified url to the endpoint.

For example, it will make a single request that looks like:

HTTP/1.1 GET http://endpoint-server.com/some-url
Host: proxy-server.com
Other-Headers: all go here

request body or whatever

Because a pure “http over http” tunnel offers no additional security or other features, it is generally simpler to go with a straightforward HTTP proxy in this case. However, if you would like to force a tunneling proxy, you may set the tunnel option to true.

You can also make a standard proxied http request by explicitly setting tunnel : false, but note that this will allow the proxy to see the traffic to/from the destination server.

If you are using a tunneling proxy, you may set the proxyHeaderWhiteList to share certain headers with the proxy.

You can also set the proxyHeaderExclusiveList to share certain headers only with the proxy and not with destination host.

By default, this set is:

accept
accept-charset
accept-encoding
accept-language
accept-ranges
cache-control
content-encoding
content-language
content-length
content-location
content-md5
content-range
content-type
connection
date
expect
max-forwards
pragma
proxy-authorization
referer
te
transfer-encoding
user-agent
via

Note that, when using a tunneling proxy, the proxy-authorization header and any headers from custom proxyHeaderExclusiveList are never sent to the endpoint server, but only to the proxy server.

Controlling proxy behaviour using environment variables

The following environment variables are respected by request:

  • HTTP_PROXY / http_proxy
  • HTTPS_PROXY / https_proxy
  • NO_PROXY / no_proxy

When HTTP_PROXY / http_proxy are set, they will be used to proxy non-SSL requests that do not have an explicit proxy configuration option present. Similarly, HTTPS_PROXY / https_proxy will be respected for SSL requests that do not have an explicit proxy configuration option. It is valid to define a proxy in one of the environment variables, but then override it for a specific request, using the proxy configuration option. Furthermore, the proxy configuration option can be explicitly set to false / null to opt out of proxying altogether for that request.

request is also aware of the NO_PROXY / no_proxy environment variables. These variables provide a granular way to opt out of proxying, on a per-host basis: they should contain a comma-separated list of hosts to opt out of proxying. It is also possible to opt out of proxying only when a particular destination port is used. Finally, the variable may be set to * to opt out of the implicit proxy configuration of the other environment variables.

Here are some examples of valid no_proxy values:

  • google.com - don’t proxy HTTP/HTTPS requests to Google.
  • google.com:443 - don’t proxy HTTPS requests to Google, but do proxy HTTP requests to Google.
  • google.com:443, yahoo.com:80 - don’t proxy HTTPS requests to Google, and don’t proxy HTTP requests to Yahoo!
  • * - ignore https_proxy/http_proxy environment variables altogether.

back to top


UNIX Domain Sockets

request supports making requests to UNIX Domain Sockets. To make one, use the following URL scheme:

Note: The SOCKET path is assumed to be absolute to the root of the host file system.
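The scheme itself is not shown above; as a sketch (socket and request paths are hypothetical), the URL is http://unix: followed by the absolute socket path, a colon, and the request path:

```javascript
// Build a UNIX domain socket URL of the form http://unix:SOCKET:PATH
// (socketPath and requestPath below are hypothetical).
var socketPath = '/var/run/my-app.sock'; // absolute path on the host file system
var requestPath = '/status';             // path sent to the server on the socket
var url = 'http://unix:' + socketPath + ':' + requestPath;

console.log(url); // http://unix:/var/run/my-app.sock:/status
// request.get(url, callback) would then issue the request over the socket.
```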

back to top


TLS/SSL Protocol

TLS/SSL protocol options, such as cert, key and passphrase, can be set directly in the options object, in the agentOptions property of the options object, or even in https.globalAgent.options. Keep in mind that, although agentOptions allows a slightly wider range of configurations, the recommended way is via the options object directly, as agentOptions or https.globalAgent.options would not be applied in the same way in proxied environments (where data travels through a TLS connection instead of an http/https agent).

Using options.agentOptions

In the example below, we call an API that requires a client-side SSL certificate (in PEM format) with a passphrase-protected private key (also in PEM format), and disable the SSLv3 protocol:

You can force the use of SSLv3 only by specifying secureProtocol:
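A sketch of such an options object (the endpoint, PEM contents, and passphrase are placeholders; real code would read the PEM files from disk with fs.readFileSync):

```javascript
// In real code, read PEM contents from disk, e.g.:
//   var cert = fs.readFileSync('ssl/client.crt');
//   var key  = fs.readFileSync('ssl/client.key');
var options = {
  url: 'https://api.some-server.com/', // hypothetical endpoint
  agentOptions: {
    cert: '-----BEGIN CERTIFICATE-----...',     // placeholder PEM contents
    key: '-----BEGIN RSA PRIVATE KEY-----...',  // placeholder PEM contents
    passphrase: 'password',                     // passphrase protecting the key
    securityOptions: 'SSL_OP_NO_SSLv3'          // disable SSLv3
    // secureProtocol: 'SSLv3_method'           // or force SSLv3 only instead
  }
};
// request(options, callback) would present the client certificate.
```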

It is possible to accept other certificates than those signed by generally allowed Certificate Authorities (CAs). This can be useful, for example, when using self-signed certificates. To require a different root certificate, you can specify the signing CA by adding the contents of the CA’s certificate file to the agentOptions. The certificate the domain presents must be signed by the root certificate specified:

The ca value can be an array of certificates, in the event you have a private or internal corporate public-key infrastructure hierarchy. For example, if you want to connect to https://api.some-server.com which presents a key chain consisting of: 1. its own public key, which is signed by: 2. an intermediate “Corp Issuing Server”, that is in turn signed by: 3. a root CA “Corp Root CA”;

you can configure your request as follows:
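A sketch of that configuration (PEM contents are placeholders; real code would read each certificate file from disk):

```javascript
// The ca array lists the chain needed to validate the server's certificate:
// the intermediate "Corp Issuing Server" and the root "Corp Root CA".
var options = {
  url: 'https://api.some-server.com/', // hypothetical endpoint
  agentOptions: {
    ca: [
      '-----BEGIN CERTIFICATE-----...', // "Corp Issuing Server" (intermediate)
      '-----BEGIN CERTIFICATE-----...'  // "Corp Root CA" (root)
    ]
  }
};
```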

back to top


The options.har property will override the values: url, method, qs, headers, form, formData, body, json, as well as construct multipart data and read files from disk when request.postData.params[].fileName is present without a matching value.

A validation step will check if the HAR Request format matches the latest spec (v1.2) and will skip parsing if not matching.

back to top


request(options, callback)

The first argument can be either a url or an options object. The only required option is uri; all others are optional.

  • uri || url - fully qualified uri or a parsed url object from url.parse()
  • baseUrl - fully qualified uri string used as the base url. Most useful with request.defaults, for example when you want to do many requests to the same domain. If baseUrl is https://example.com/api/, then requesting /end/point?test=true will fetch https://example.com/api/end/point?test=true. When baseUrl is given, uri must also be a string.
  • method - http method (default: "GET")
  • headers - http headers (default: {})

  • qs - object containing querystring values to be appended to the uri
  • qsParseOptions - object containing options to pass to the qs.parse method. Alternatively pass options to the querystring.parse method using this format {sep:';', eq:':', options:{}}
  • qsStringifyOptions - object containing options to pass to the qs.stringify method. Alternatively pass options to the querystring.stringify method using this format {sep:';', eq:':', options:{}}. For example, to change the way arrays are converted to query strings using the qs module pass the arrayFormat option with one of indices|brackets|repeat
  • useQuerystring - if true, use querystring to stringify and parse querystrings, otherwise use qs (default: false). Set this option to true if you need arrays to be serialized as foo=bar&foo=baz instead of the default foo[0]=bar&foo[1]=baz.

  • body - entity body for PATCH, POST and PUT requests. Must be a Buffer, String or ReadStream. If json is true, then body must be a JSON-serializable object.
  • form - when passed an object or a querystring, this sets body to a querystring representation of value, and adds Content-type: application/x-www-form-urlencoded header. When passed no options, a FormData instance is returned (and is piped to request). See “Forms” section above.
  • formData - data to pass for a multipart/form-data request. See Forms section above.
  • multipart - array of objects which contain their own headers and body attributes. Sends a multipart/related request. See Forms section above.
    • Alternatively you can pass in an object {chunked: false, data: []} where chunked is used to specify whether the request is sent in chunked transfer encoding. In non-chunked requests, data items with body streams are not allowed.
  • preambleCRLF - append a newline/CRLF before the boundary of your multipart/form-data request.
  • postambleCRLF - append a newline/CRLF at the end of the boundary of your multipart/form-data request.
  • json - sets body to JSON representation of value and adds Content-type: application/json header. Additionally, parses the response body as JSON.
  • jsonReviver - a reviver function that will be passed to JSON.parse() when parsing a JSON response body.
  • jsonReplacer - a replacer function that will be passed to JSON.stringify() when stringifying a JSON request body.

  • auth - a hash containing values user || username, pass || password, and sendImmediately (optional). See documentation above.
  • oauth - options for OAuth HMAC-SHA1 signing. See documentation above.
  • hawk - options for Hawk signing. The credentials key must contain the necessary signing info, see hawk docs for details.
  • aws - object containing AWS signing information. Should have the properties key, secret, and optionally session (note that this only works for services that require session as part of the canonical string). Also requires the property bucket, unless you’re specifying your bucket as part of the path, or the request doesn’t use a bucket (i.e. GET Services). If you want to use AWS sign version 4 use the parameter sign_version with value 4 otherwise the default is version 2. If you are using SigV4, you can also include a service property that specifies the service name. Note: you need to npm install aws4 first.
  • httpSignature - options for the HTTP Signature Scheme using Joyent’s library. The keyId and key properties must be specified. See the docs for other options.

  • followRedirect - follow HTTP 3xx responses as redirects (default: true). This property can also be implemented as function which gets response object as a single argument and should return true if redirects should continue or false otherwise.
  • followAllRedirects - follow non-GET HTTP 3xx responses as redirects (default: false)
  • followOriginalHttpMethod - by default we redirect to HTTP method GET. You can enable this property to redirect to the original HTTP method (default: false)
  • maxRedirects - the maximum number of redirects to follow (default: 10)
  • removeRefererHeader - removes the referer header when a redirect happens (default: false). Note: if true, referer header set in the initial request is preserved during redirect chain.

  • encoding - encoding to be used on setEncoding of response data. If null, the body is returned as a Buffer. Anything else (including the default value of undefined) will be passed as the encoding parameter to toString() (meaning this is effectively utf8 by default). (Note: if you expect binary data, you should set encoding: null.)
  • gzip - if true, add an Accept-Encoding header to request compressed content encodings from the server (if not already present) and decode supported content encodings in the response. Note: Automatic decoding of the response content is performed on the body data returned through request (both through the request stream and passed to the callback function) but is not performed on the response stream (available from the response event) which is the unmodified http.IncomingMessage object which may contain compressed data. See example below.
  • jar - if true, remember cookies for future use (or define your custom cookie jar; see examples section)

  • agent - http(s).Agent instance to use
  • agentClass - alternatively specify your agent’s class name
  • agentOptions - and pass its options. Note: for HTTPS see tls API doc for TLS/SSL options and the documentation above.
  • forever - set to true to use the forever-agent Note: Defaults to http(s).Agent({keepAlive:true}) in node 0.12+
  • pool - an object describing which agents to use for the request. If this option is omitted the request will use the global agent (as long as your options allow for it). Otherwise, request will search the pool for your custom agent. If no custom agent is found, a new agent will be created and added to the pool. Note: pool is used only when the agent option is not specified.
    • A maxSockets property can also be provided on the pool object to set the max number of sockets for all agents created (ex: pool: {maxSockets: Infinity}).
    • Note that if you are sending multiple requests in a loop and creating multiple new pool objects, maxSockets will not work as intended. To work around this, either use request.defaults with your pool options or create the pool object with the maxSockets property outside of the loop.
  • timeout - integer containing number of milliseconds, controls two timeouts.
    • Read timeout: Time to wait for a server to send response headers (and start the response body) before aborting the request.
    • Connection timeout: Sets the socket to timeout after timeout milliseconds of inactivity. Note that increasing the timeout beyond the OS-wide TCP connection timeout will not have any effect (the default on Linux can be anywhere from 20 to 120 seconds).

  • localAddress - local interface to bind for network connections.
  • strictSSL - if true, requires SSL certificates be valid. Note: to use your own certificate authority, you need to specify an agent that was created with that CA as an option.
  • tunnel - controls the behavior of HTTP CONNECT tunneling as follows:
    • undefined (default) - true if the destination is https, false otherwise
    • true - always tunnel to the destination by making a CONNECT request to the proxy
    • false - request the destination as a GET request.
  • proxyHeaderWhiteList - a whitelist of headers to send to a tunneling proxy.
  • proxyHeaderExclusiveList - a whitelist of headers to send exclusively to a tunneling proxy and not to destination.

  • time - if true, the request-response cycle (including all redirects) is timed at millisecond resolution. When set, the following properties are added to the response object:
    • elapsedTime Duration of the entire request/response in milliseconds (deprecated).
    • responseStartTime Timestamp when the response began (in Unix Epoch milliseconds) (deprecated).
    • timingStart Timestamp of the start of the request (in Unix Epoch milliseconds).
    • timings Contains event timestamps in millisecond resolution relative to timingStart. If there were redirects, the properties reflect the timings of the final request in the redirect chain:
      • socket Relative timestamp when the http module’s socket event fires. This happens when the socket is assigned to the request.
      • lookup Relative timestamp when the net module’s lookup event fires. This happens when the DNS has been resolved.
      • connect: Relative timestamp when the net module’s connect event fires. This happens when the server acknowledges the TCP connection.
      • response: Relative timestamp when the http module’s response event fires. This happens when the first bytes are received from the server.
      • end: Relative timestamp when the last bytes of the response are received.
    • timingPhases Contains the durations of each request phase. If there were redirects, the properties reflect the timings of the final request in the redirect chain:
      • wait: Duration of socket initialization (timings.socket)
      • dns: Duration of DNS lookup (timings.lookup - timings.socket)
      • tcp: Duration of TCP connection (timings.connect - timings.socket)
      • firstByte: Duration of HTTP server response (timings.response - timings.connect)
      • download: Duration of HTTP download (timings.end - timings.response)
      • total: Duration entire HTTP round-trip (timings.end)
  • callback - alternatively pass the request’s callback in the options object

The callback argument gets 3 arguments:

  1. An error when applicable (usually from http.ClientRequest object)
  2. An http.IncomingMessage object (Response object)
  3. The response body (String or Buffer, or JSON object if the json option is supplied)

back to top


Convenience methods

There are also shorthand methods for the different HTTP methods and some other conveniences.

request.defaults(options)

This method returns a wrapper around the normal request API that defaults to whatever options you pass to it.

Note: request.defaults() does not modify the global request API; instead, it returns a wrapper that has your default settings applied to it.

Note: You can call .defaults() on the wrapper that is returned from request.defaults to add/override defaults that were previously defaulted.

For example:
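A sketch, assuming request is installed (the header names and URL are hypothetical):

```javascript
var request = require('request');

// every request made through baseRequest sends the token header
var baseRequest = request.defaults({
  headers: {'x-token': 'my-token'} // hypothetical auth header
});

// further defaults can be layered on top of an existing wrapper
var specialRequest = baseRequest.defaults({
  headers: {special: 'special value'}
});

// sent with both the x-token and special headers
specialRequest.get('https://example.com/', function (err, res, body) {
  // ...
});
```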

request.METHOD()

These HTTP method convenience functions act just like request() but with a default method already set for you:

  • request.get(): Defaults to method: "GET".
  • request.post(): Defaults to method: "POST".
  • request.put(): Defaults to method: "PUT".
  • request.patch(): Defaults to method: "PATCH".
  • request.del() / request.delete(): Defaults to method: "DELETE".
  • request.head(): Defaults to method: "HEAD".
  • request.options(): Defaults to method: "OPTIONS".

request.cookie()

Function that creates a new cookie.

request.jar()

Function that creates a new cookie jar.

response.caseless.get('header-name')

Function that returns the specified response header field using a case-insensitive match
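For example, assuming request is installed (URL hypothetical):

```javascript
var request = require('request');

request('https://example.com/', function (err, res, body) {
  if (err) throw err;
  // matches regardless of the header's case on the wire
  console.log(res.caseless.get('content-type'));
});
```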

back to top


Debugging

There are at least three ways to debug the operation of request:

  1. Launch the node process like NODE_DEBUG=request node script.js (lib,request,otherlib works too).

  2. Set require('request').debug = true at any time (this does the same thing as #1).

  3. Use the request-debug module to view request and response headers and bodies.

back to top


Timeouts

Most requests to external servers should have a timeout attached, in case the server is not responding in a timely manner. Without a timeout, your code may have a socket open/consume resources for minutes or more.

There are two main types of timeouts: connection timeouts and read timeouts. A connect timeout occurs if the timeout is hit while your client is attempting to establish a connection to a remote machine (corresponding to the connect() call on the socket). A read timeout occurs any time the server is too slow to send back a part of the response.

These two situations have widely different implications for what went wrong with the request, so it’s useful to be able to distinguish them. You can detect timeout errors by checking err.code for an ‘ETIMEDOUT’ value. Further, you can detect whether the timeout was a connection timeout by checking if the err.connect property is set to true.

Examples:
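The checks above can be wrapped in a small helper; classifyError below is our own name, not part of request's API:

```javascript
// Classify a request error per the rules above: err.code === 'ETIMEDOUT'
// marks a timeout, and err.connect === true marks it as a connection timeout.
function classifyError(err) {
  if (!err || err.code !== 'ETIMEDOUT') return 'not a timeout';
  return err.connect === true ? 'connection timeout' : 'read timeout';
}

// Simulated error objects shaped like those request produces:
console.log(classifyError({code: 'ETIMEDOUT', connect: true})); // connection timeout
console.log(classifyError({code: 'ETIMEDOUT'}));                // read timeout
console.log(classifyError({code: 'ECONNREFUSED'}));             // not a timeout
```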

For backwards-compatibility, response compression is not supported by default. To accept gzip-compressed responses, set the gzip option to true. Note that the body data passed through request is automatically decompressed while the response object is unmodified and will contain compressed data if the server sent a compressed response.

Cookies are disabled by default (else, they would be used in subsequent requests). To enable cookies, set jar to true (either in defaults or options).

To use a custom cookie jar (instead of request’s global cookie jar), set jar to an instance of request.jar() (either in defaults or options)

OR

To use a custom cookie store (such as a FileCookieStore which supports saving to and restoring from JSON files), pass it as a parameter to request.jar():

The cookie store must be a tough-cookie store and it must support synchronous operations; see the CookieStore API docs for details.

To inspect your cookie jar after a request:
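A sketch of working with a custom jar, assuming request is installed (the cookie and URL are hypothetical):

```javascript
var request = require('request');

var j = request.jar();
var cookie = request.cookie('key1=value1'); // hypothetical cookie
var url = 'https://example.com';            // hypothetical URL
j.setCookie(cookie, url);

request({url: url, jar: j}, function (err, res, body) {
  // inspect the jar after the request
  console.log(j.getCookieString(url)); // cookie string for the url
  console.log(j.getCookies(url));      // array of Cookie objects
});
```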

back to top



Forge

npm package

Build status

A native implementation of TLS (and various other cryptographic tools) in JavaScript.

Introduction

Performance

Forge is fast. Benchmarks against other popular JavaScript cryptography libraries can be found here:

  • http://dominictarr.github.io/crypto-bench/
  • http://cryptojs.altervista.org/test/simulate-threading-speed_test.html

Documentation

API

Transports

Ciphers

PKI

Message Digests

Utilities

Other


Installation

Note: Please see the Security Considerations section before using packaging systems and pre-built files.

Forge uses a CommonJS module structure with a build process for browser bundles. The older 0.6.x branch with standalone files is available but will not be regularly updated.

Node.js

If you want to use forge with Node.js, it is available through npm:

https://npmjs.org/package/node-forge

Installation:

npm install node-forge

You can then use forge as a regular module:
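For example, requiring forge and computing a SHA-256 digest with its md API:

```javascript
var forge = require('node-forge');

// compute a SHA-256 digest of the string 'abc'
var md = forge.md.sha256.create();
md.update('abc');
console.log(md.digest().toHex());
// ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad
```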

The npm package includes pre-built forge.min.js, forge.all.min.js, and prime.worker.min.js using the UMD format.

Bundle / Bower

Each release is published in a separate repository as pre-built and minimized basic forge bundles using the UMD format.

https://github.com/digitalbazaar/forge-dist

This bundle can be used in many environments. In particular it can be installed with Bower:

bower install forge

jsDelivr CDN

To use it via jsDelivr include this in your html:

unpkg CDN

To use it via unpkg include this in your html:

Development Requirements

The core JavaScript has the following requirements to build and test:

  • Building a browser bundle:
    • Node.js
    • npm
  • Testing
    • Node.js
    • npm
    • Chrome, Firefox, Safari (optional)

Some special networking features can optionally use a Flash component. See the Flash README for details.

Building for a web browser

To create single file bundles for use with browsers run the following:

npm install
npm run build

This will create single non-minimized and minimized files that can be included in the browser:

dist/forge.js
dist/forge.min.js

A bundle that adds some utilities and networking support is also available:

dist/forge.all.js
dist/forge.all.min.js

Include the file via:

or

The above bundles will synchronously create a global ‘forge’ object.

Note: These bundles will not include any WebWorker scripts (eg: dist/prime.worker.js), so these will need to be accessible from the browser if any WebWorkers are used.

Building a custom browser bundle

The build process uses webpack and the config file can be modified to generate a file or files that only contain the parts of forge you need.

Browserify override support is also present in package.json.

Testing

Prepare to run tests

npm install

Running automated tests with Node.js

Forge natively runs in a Node.js environment:

npm test

Running automated tests with Headless Chrome

Automated testing is done via Karma. By default it will run the tests with Headless Chrome.

npm run test-karma

Is ‘mocha’ reporter output too verbose? Other reporters are available. Try ‘dots’, ‘progress’, or ‘tap’.

npm run test-karma -- --reporters progress

By default webpack is used. Browserify can also be used.

BUNDLER=browserify npm run test-karma

Running automated tests with one or more browsers

You can also specify one or more browsers to use.

npm run test-karma -- --browsers Chrome,Firefox,Safari,ChromeHeadless

The reporter option and BUNDLER environment variable can also be used.

Running manual tests in a browser

Testing in a browser uses webpack to combine forge and all tests, then loads the result in a browser. A simple web server is provided that will output the HTTP or HTTPS URLs to load. It also starts a simple Flash Policy Server. Unit tests and older legacy tests are provided. Custom ports can be used by running node tests/server.js manually.

To run the unit tests in a browser a special forge build is required:

npm run test-build

To run legacy browser based tests the main forge build is required:

npm run build

The tests are run with a custom server that prints out the URLs to use:

npm run test-server

Running other tests

There are some other random tests and benchmarks available in the tests directory.

Coverage testing

To perform coverage testing of the unit tests, run the following. The results will be put in the coverage/ directory. Note that coverage testing can slow down some tests considerably.

npm install
npm run coverage

Contributing

See: LICENSE

API

Options

If at any time you wish to disable the use of native code, where available, for particular forge features like its secure random number generator, you may set the forge.options.usePureJavaScript flag to true. It is not recommended that you set this flag as native code is typically more performant and may have stronger security properties. It may be useful to set this flag to test certain features that you plan to run in environments that are different from your testing environment.

To disable native code when including forge in the browser:

To disable native code when using Node.js:
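In both cases it is the same one-line flag; only how forge is obtained differs (a global from a script tag in the browser versus require in Node.js):

```javascript
// In the browser, `forge` is the global created by the bundle instead.
var forge = require('node-forge');
forge.options.usePureJavaScript = true; // skip native code paths everywhere
```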

Transports

TLS

Provides a native JavaScript client- and server-side TLS implementation.

Examples

// create TLS client
var client = forge.tls.createConnection({
  server: false,
  caStore: /* Array of PEM-formatted certs or a CA store object */,
  sessionCache: {},
  // supported cipher suites in order of preference
  cipherSuites: [
    forge.tls.CipherSuites.TLS_RSA_WITH_AES_128_CBC_SHA,
    forge.tls.CipherSuites.TLS_RSA_WITH_AES_256_CBC_SHA],
  virtualHost: 'example.com',
  verify: function(connection, verified, depth, certs) {
    if(depth === 0) {
      var cn = certs[0].subject.getField('CN').value;
      if(cn !== 'example.com') {
        verified = {
          alert: forge.tls.Alert.Description.bad_certificate,
          message: 'Certificate common name does not match hostname.'
        };
      }
    }
    return verified;
  },
  connected: function(connection) {
    console.log('connected');
    // send message to server
    connection.prepare(forge.util.encodeUtf8('Hi server!'));
    /* NOTE: experimental, start heartbeat retransmission timer
    myHeartbeatTimer = setInterval(function() {
      connection.prepareHeartbeatRequest(forge.util.createBuffer('1234'));
    }, 5*60*1000);*/
  },
  /* provide a client-side cert if you want
  getCertificate: function(connection, hint) {
    return myClientCertificate;
  },
  // the private key for the client-side cert if provided
  getPrivateKey: function(connection, cert) {
    return myClientPrivateKey;
  },
  */
  tlsDataReady: function(connection) {
    // TLS data (encrypted) is ready to be sent to the server
    sendToServerSomehow(connection.tlsData.getBytes());
    // if you were communicating with the server below, you'd do:
    // server.process(connection.tlsData.getBytes());
  },
  dataReady: function(connection) {
    // cleartext data from the server is ready
    console.log('the server sent: ' +
      forge.util.decodeUtf8(connection.data.getBytes()));
    // close connection
    connection.close();
  },
  /* NOTE: experimental
  heartbeatReceived: function(connection, payload) {
    // restart retransmission timer, look at payload
    clearInterval(myHeartbeatTimer);
    myHeartbeatTimer = setInterval(function() {
      connection.prepareHeartbeatRequest(forge.util.createBuffer('1234'));
    }, 5*60*1000);
    payload.getBytes();
  },*/
  closed: function(connection) {
    console.log('disconnected');
  },
  error: function(connection, error) {
    console.log('uh oh', error);
  }
});

// start the handshake process
client.handshake();

// when encrypted TLS data is received from the server, process it
client.process(encryptedBytesFromServer);

// create TLS server
var server = forge.tls.createConnection({
  server: true,
  caStore: /* Array of PEM-formatted certs or a CA store object */,
  sessionCache: {},
  // supported cipher suites in order of preference
  cipherSuites: [
    forge.tls.CipherSuites.TLS_RSA_WITH_AES_128_CBC_SHA,
    forge.tls.CipherSuites.TLS_RSA_WITH_AES_256_CBC_SHA],
  // require a client-side certificate if you want
  verifyClient: true,
  verify: function(connection, verified, depth, certs) {
    if(depth === 0) {
      var cn = certs[0].subject.getField('CN').value;
      if(cn !== 'the-client') {
        verified = {
          alert: forge.tls.Alert.Description.bad_certificate,
          message: 'Certificate common name does not match expected client.'
        };
      }
    }
    return verified;
  },
  connected: function(connection) {
    console.log('connected');
    // send message to client
    connection.prepare(forge.util.encodeUtf8('Hi client!'));
    /* NOTE: experimental, start heartbeat retransmission timer
    myHeartbeatTimer = setInterval(function() {
      connection.prepareHeartbeatRequest(forge.util.createBuffer('1234'));
    }, 5*60*1000);*/
  },
  getCertificate: function(connection, hint) {
    return myServerCertificate;
  },
  getPrivateKey: function(connection, cert) {
    return myServerPrivateKey;
  },
  tlsDataReady: function(connection) {
    // TLS data (encrypted) is ready to be sent to the client
    sendToClientSomehow(connection.tlsData.getBytes());
    // if you were communicating with the client above you'd do:
    // client.process(connection.tlsData.getBytes());
  },
  dataReady: function(connection) {
    // cleartext data from the client is ready
    console.log('the client sent: ' +
      forge.util.decodeUtf8(connection.data.getBytes()));
    // close connection
    connection.close();
  },
  /* NOTE: experimental
  heartbeatReceived: function(connection, payload) {
    // restart retransmission timer, look at payload
    clearInterval(myHeartbeatTimer);
    myHeartbeatTimer = setInterval(function() {
      connection.prepareHeartbeatRequest(forge.util.createBuffer('1234'));
    }, 5*60*1000);
    payload.getBytes();
  },*/
  closed: function(connection) {
    console.log('disconnected');
  },
  error: function(connection, error) {
    console.log('uh oh', error);
  }
});

// when encrypted TLS data is received from the client, process it
server.process(encryptedBytesFromClient);

Connect to a TLS server using node’s net.Socket:
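A sketch of wiring the TLS connection to a raw socket (the host, port, and the skipped certificate verification are for illustration only):

```javascript
var forge = require('node-forge');
var net = require('net');

var socket = new net.Socket();
var client = forge.tls.createConnection({
  server: false,
  verify: function(connection, verified, depth, certs) {
    // illustration only: accept any certificate; real code should verify
    return true;
  },
  connected: function(connection) {
    // the handshake is done; send application data
    connection.prepare('GET / HTTP/1.0\r\n\r\n');
  },
  tlsDataReady: function(connection) {
    // encrypted TLS records go out over the raw socket
    socket.write(connection.tlsData.getBytes(), 'binary');
  },
  dataReady: function(connection) {
    // cleartext response from the server
    console.log(connection.data.getBytes());
  },
  closed: function() {
    socket.end();
  },
  error: function(connection, error) {
    console.log('TLS error', error);
  }
});

socket.on('connect', function() {
  client.handshake();
});
socket.on('data', function(data) {
  // feed received TLS records back into forge
  client.process(data.toString('binary'));
});

socket.connect(443, 'example.com'); // hypothetical host
```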

HTTP

Provides a native JavaScript mini-implementation of an http client that uses pooled sockets.

Examples

SSH

Provides some SSH utility functions.

Examples

XHR

Provides an XmlHttpRequest implementation using forge.http as a backend.

Examples

Sockets

Provides an interface to create and use raw sockets provided via Flash.

Examples

Ciphers

CIPHER

Provides a basic API for block encryption and decryption. There is built-in support for the ciphers: AES, 3DES, and DES, and for the modes of operation: ECB, CBC, CFB, OFB, CTR, and GCM.

These algorithms are currently supported:

  • AES-ECB
  • AES-CBC
  • AES-CFB
  • AES-OFB
  • AES-CTR
  • AES-GCM
  • 3DES-ECB
  • 3DES-CBC
  • DES-ECB
  • DES-CBC

When using an AES algorithm, the key size will determine whether AES-128, AES-192, or AES-256 is used (all are supported). When a DES algorithm is used, the key size will determine whether 3DES or regular DES is used. Use a 3DES algorithm to enforce Triple-DES.

Examples

// generate a random key and IV
// Note: a key size of 16 bytes will use AES-128, 24 => AES-192, 32 => AES-256
var key = forge.random.getBytesSync(16);
var iv = forge.random.getBytesSync(16);

/* alternatively, generate a password-based 16-byte key
var salt = forge.random.getBytesSync(128);
var key = forge.pkcs5.pbkdf2('password', salt, numIterations, 16);
*/

// encrypt some bytes using CBC mode
// (other modes include: ECB, CFB, OFB, CTR, and GCM)
// Note: CBC and ECB modes use PKCS#7 padding as default
var cipher = forge.cipher.createCipher('AES-CBC', key);
cipher.start({iv: iv});
cipher.update(forge.util.createBuffer(someBytes));
cipher.finish();
var encrypted = cipher.output;
// outputs encrypted hex
console.log(encrypted.toHex());

// decrypt some bytes using CBC mode
// (other modes include: CFB, OFB, CTR, and GCM)
var decipher = forge.cipher.createDecipher('AES-CBC', key);
decipher.start({iv: iv});
decipher.update(encrypted);
var result = decipher.finish(); // check 'result' for true/false
// outputs decrypted hex
console.log(decipher.output.toHex());

// decrypt bytes using CBC mode and streaming
// Performance can suffer for large multi-MB inputs due to buffer
// manipulations. Stream processing in chunks can offer significant
// improvement. CPU intensive update() calls could also be performed with
// setImmediate/setTimeout to avoid blocking the main browser UI thread (not
// shown here). Optimal block size depends on the JavaScript VM and other
// factors. Encryption can use a simple technique for increased performance.
var encryptedBytes = encrypted.bytes();
var decipher = forge.cipher.createDecipher('AES-CBC', key);
decipher.start({iv: iv});
var length = encryptedBytes.length;
var chunkSize = 1024 * 64;
var index = 0;
var decrypted = '';
do {
  decrypted += decipher.output.getBytes();
  var buf = forge.util.createBuffer(encryptedBytes.substr(index, chunkSize));
  decipher.update(buf);
  index += chunkSize;
} while(index < length);
var result = decipher.finish();
assert(result);
decrypted += decipher.output.getBytes();
console.log(forge.util.bytesToHex(decrypted));

// encrypt some bytes using GCM mode
var cipher = forge.cipher.createCipher('AES-GCM', key);
cipher.start({
  iv: iv, // should be a 12-byte binary-encoded string or byte buffer
  additionalData: 'binary-encoded string', // optional
  tagLength: 128 // optional, defaults to 128 bits
});
cipher.update(forge.util.createBuffer(someBytes));
cipher.finish();
var encrypted = cipher.output;
var tag = cipher.mode.tag;
// outputs encrypted hex
console.log(encrypted.toHex());
// outputs authentication tag
console.log(tag.toHex());

// decrypt some bytes using GCM mode
var decipher = forge.cipher.createDecipher('AES-GCM', key);
decipher.start({
  iv: iv,
  additionalData: 'binary-encoded string', // optional
  tagLength: 128, // optional, defaults to 128 bits
  tag: tag // authentication tag from encryption
});
decipher.update(encrypted);
var pass = decipher.finish();
// pass is false if there was a failure (eg: authentication tag didn't match)
if(pass) {
  // outputs decrypted hex
  console.log(decipher.output.toHex());
}

Using forge in Node.js to match openssl’s “enc” command line tool (Note: OpenSSL “enc” uses a non-standard file format with a custom key derivation function and a fixed iteration count of 1, which some consider less secure than alternatives such as OpenPGP/GnuPG):

var forge = require('node-forge');
var fs = require('fs');

// openssl enc -des3 -in input.txt -out input.enc
function encrypt(password) {
  var input = fs.readFileSync('input.txt', {encoding: 'binary'});

  // 3DES key and IV sizes
  var keySize = 24;
  var ivSize = 8;

  // get derived bytes
  // Notes:
  // 1. If using an alternative hash (eg: "-md sha1") pass
  //   "forge.md.sha1.create()" as the final parameter.
  // 2. If using "-nosalt", set salt to null.
  var salt = forge.random.getBytesSync(8);
  // var md = forge.md.sha1.create(); // "-md sha1"
  var derivedBytes = forge.pbe.opensslDeriveBytes(
    password, salt, keySize + ivSize/*, md*/);
  var buffer = forge.util.createBuffer(derivedBytes);
  var key = buffer.getBytes(keySize);
  var iv = buffer.getBytes(ivSize);

  var cipher = forge.cipher.createCipher('3DES-CBC', key);
  cipher.start({iv: iv});
  cipher.update(forge.util.createBuffer(input, 'binary'));
  cipher.finish();

  var output = forge.util.createBuffer();

  // if using a salt, prepend this to the output:
  if(salt !== null) {
    output.putBytes('Salted__'); // (add to match openssl tool output)
    output.putBytes(salt);
  }
  output.putBuffer(cipher.output);

  fs.writeFileSync('input.enc', output.getBytes(), {encoding: 'binary'});
}

// openssl enc -d -des3 -in input.enc -out input.dec.txt
function decrypt(password) {
  var input = fs.readFileSync('input.enc', {encoding: 'binary'});

  // parse salt from input
  input = forge.util.createBuffer(input, 'binary');
  // skip "Salted__" (if known to be present)
  input.getBytes('Salted__'.length);
  // read 8-byte salt
  var salt = input.getBytes(8);

  // Note: if using "-nosalt", skip above parsing and use
  // var salt = null;

  // 3DES key and IV sizes
  var keySize = 24;
  var ivSize = 8;

  var derivedBytes = forge.pbe.opensslDeriveBytes(
    password, salt, keySize + ivSize);
  var buffer = forge.util.createBuffer(derivedBytes);
  var key = buffer.getBytes(keySize);
  var iv = buffer.getBytes(ivSize);

  var decipher = forge.cipher.createDecipher('3DES-CBC', key);
  decipher.start({iv: iv});
  decipher.update(input);
  var result = decipher.finish(); // check 'result' for true/false

  fs.writeFileSync(
    'input.dec.txt', decipher.output.getBytes(), {encoding: 'binary'});
}

AES

Provides AES encryption and decryption in CBC, CFB, OFB, CTR, and GCM modes. See CIPHER for examples.

DES

Provides 3DES and DES encryption and decryption in ECB and CBC modes. See CIPHER for examples.

RC2

Examples

PKI

Provides X.509 certificate support, ED25519 key generation and signing/verifying, and RSA public and private key encoding, decoding, encryption/decryption, and signing/verifying.

ED25519

Special thanks to TweetNaCl.js for providing the bulk of the implementation.

Examples

var ed25519 = forge.pki.ed25519;

// generate a random ED25519 keypair
var keypair = ed25519.generateKeyPair();
// `keypair.publicKey` is a node.js Buffer or Uint8Array
// `keypair.privateKey` is a node.js Buffer or Uint8Array

// generate a random ED25519 keypair based on a random 32-byte seed
var seed = forge.random.getBytesSync(32);
var keypair = ed25519.generateKeyPair({seed: seed});

// generate a random ED25519 keypair based on a "password" 32-byte seed
var password = 'Mai9ohgh6ahxee0jutheew0pungoozil';
var seed = new forge.util.ByteBuffer(password, 'utf8');
var keypair = ed25519.generateKeyPair({seed: seed});

// sign a UTF-8 message
var signature = ed25519.sign({
  message: 'test',
  // also accepts `binary` if you want to pass a binary string
  encoding: 'utf8',
  // node.js Buffer, Uint8Array, forge ByteBuffer, binary string
  privateKey: privateKey
});
// `signature` is a node.js Buffer or Uint8Array

// sign a message passed as a buffer
var signature = ed25519.sign({
  // also accepts a forge ByteBuffer or Uint8Array
  message: Buffer.from('test', 'utf8'),
  privateKey: privateKey
});

// sign a message digest (shorter "message" == better performance)
var md = forge.md.sha256.create();
md.update('test', 'utf8');
var signature = ed25519.sign({
  md: md,
  privateKey: privateKey
});

// verify a signature on a UTF-8 message
var verified = ed25519.verify({
  message: 'test',
  encoding: 'utf8',
  // node.js Buffer, Uint8Array, forge ByteBuffer, or binary string
  signature: signature,
  // node.js Buffer, Uint8Array, forge ByteBuffer, or binary string
  publicKey: publicKey
});
// `verified` is true/false

// verify a signature on a message passed as a buffer
var verified = ed25519.verify({
  // also accepts a forge ByteBuffer or Uint8Array
  message: Buffer.from('test', 'utf8'),
  // node.js Buffer, Uint8Array, forge ByteBuffer, or binary string
  signature: signature,
  // node.js Buffer, Uint8Array, forge ByteBuffer, or binary string
  publicKey: publicKey
});

// verify a signature on a message digest
var md = forge.md.sha256.create();
md.update('test', 'utf8');
var verified = ed25519.verify({
  md: md,
  // node.js Buffer, Uint8Array, forge ByteBuffer, or binary string
  signature: signature,
  // node.js Buffer, Uint8Array, forge ByteBuffer, or binary string
  publicKey: publicKey
});

RSA

Examples

var rsa = forge.pki.rsa;

// generate an RSA key pair synchronously
// *NOT RECOMMENDED*: Can be significantly slower than async and may block
// JavaScript execution. Will use native Node.js 10.12.0+ API if possible.
var keypair = rsa.generateKeyPair({bits: 2048, e: 0x10001});

// generate an RSA key pair asynchronously (uses web workers if available)
// use workers: -1 to run a fast core estimator to optimize # of workers
// *RECOMMENDED*: Can be significantly faster than sync. Will use native
// Node.js 10.12.0+ or WebCrypto API if possible.
rsa.generateKeyPair({bits: 2048, workers: 2}, function(err, keypair) {
  // keypair.privateKey, keypair.publicKey
});

// generate an RSA key pair in steps that attempt to run for a specified period
// of time on the main JS thread
var state = rsa.createKeyPairGenerationState(2048, 0x10001);
var step = function() {
  // run for 100 ms
  if(!rsa.stepKeyPairGenerationState(state, 100)) {
    setTimeout(step, 1);
  }
  else {
    // done, turn off progress indicator, use state.keys
  }
};
// turn on progress indicator, schedule generation to run
setTimeout(step);

// sign data with a private key and output DigestInfo DER-encoded bytes
// (defaults to RSASSA PKCS#1 v1.5)
var md = forge.md.sha1.create();
md.update('sign this', 'utf8');
var signature = privateKey.sign(md);

// verify data with a public key
// (defaults to RSASSA PKCS#1 v1.5)
var verified = publicKey.verify(md.digest().bytes(), signature);

// sign data using RSASSA-PSS where PSS uses a SHA-1 hash, a SHA-1 based
// masking function MGF1, and a 20 byte salt
var md = forge.md.sha1.create();
md.update('sign this', 'utf8');
var pss = forge.pss.create({
  md: forge.md.sha1.create(),
  mgf: forge.mgf.mgf1.create(forge.md.sha1.create()),
  saltLength: 20
  // optionally pass 'prng' with a custom PRNG implementation
  // optionally pass 'salt' with a forge.util.ByteBuffer w/custom salt
});
var signature = privateKey.sign(md, pss);

// verify RSASSA-PSS signature
var pss = forge.pss.create({
  md: forge.md.sha1.create(),
  mgf: forge.mgf.mgf1.create(forge.md.sha1.create()),
  saltLength: 20
  // optionally pass 'prng' with a custom PRNG implementation
});
var md = forge.md.sha1.create();
md.update('sign this', 'utf8');
publicKey.verify(md.digest().getBytes(), signature, pss);

// encrypt data with a public key (defaults to RSAES PKCS#1 v1.5)
var encrypted = publicKey.encrypt(bytes);

// decrypt data with a private key (defaults to RSAES PKCS#1 v1.5)
var decrypted = privateKey.decrypt(encrypted);

// encrypt data with a public key using RSAES PKCS#1 v1.5
var encrypted = publicKey.encrypt(bytes, 'RSAES-PKCS1-V1_5');

// decrypt data with a private key using RSAES PKCS#1 v1.5
var decrypted = privateKey.decrypt(encrypted, 'RSAES-PKCS1-V1_5');

// encrypt data with a public key using RSAES-OAEP
var encrypted = publicKey.encrypt(bytes, 'RSA-OAEP');

// decrypt data with a private key using RSAES-OAEP
var decrypted = privateKey.decrypt(encrypted, 'RSA-OAEP');

// encrypt data with a public key using RSAES-OAEP/SHA-256
var encrypted = publicKey.encrypt(bytes, 'RSA-OAEP', {
  md: forge.md.sha256.create()
});

// decrypt data with a private key using RSAES-OAEP/SHA-256
var decrypted = privateKey.decrypt(encrypted, 'RSA-OAEP', {
  md: forge.md.sha256.create()
});

// encrypt data with a public key using RSAES-OAEP/SHA-256/MGF1-SHA-1
// compatible with Java's RSA/ECB/OAEPWithSHA-256AndMGF1Padding
var encrypted = publicKey.encrypt(bytes, 'RSA-OAEP', {
  md: forge.md.sha256.create(),
  mgf1: {
    md: forge.md.sha1.create()
  }
});

// decrypt data with a private key using RSAES-OAEP/SHA-256/MGF1-SHA-1
// compatible with Java's RSA/ECB/OAEPWithSHA-256AndMGF1Padding
var decrypted = privateKey.decrypt(encrypted, 'RSA-OAEP', {
  md: forge.md.sha256.create(),
  mgf1: {
    md: forge.md.sha1.create()
  }
});

RSA-KEM

Examples

X.509

Examples

var pki = forge.pki;

// convert a PEM-formatted public key to a Forge public key
var publicKey = pki.publicKeyFromPem(pem);

// convert a Forge public key to PEM-format
var pem = pki.publicKeyToPem(publicKey);

// convert an ASN.1 SubjectPublicKeyInfo to a Forge public key
var publicKey = pki.publicKeyFromAsn1(subjectPublicKeyInfo);

// convert a Forge public key to an ASN.1 SubjectPublicKeyInfo
var subjectPublicKeyInfo = pki.publicKeyToAsn1(publicKey);

// gets a SHA-1 RSAPublicKey fingerprint as a byte buffer
pki.getPublicKeyFingerprint(key);

// gets a SHA-1 SubjectPublicKeyInfo fingerprint as a byte buffer
pki.getPublicKeyFingerprint(key, {type: 'SubjectPublicKeyInfo'});

// gets a hex-encoded, colon-delimited SHA-1 RSAPublicKey public key fingerprint
pki.getPublicKeyFingerprint(key, {encoding: 'hex', delimiter: ':'});

// gets a hex-encoded, colon-delimited SHA-1 SubjectPublicKeyInfo public key fingerprint
pki.getPublicKeyFingerprint(key, {
  type: 'SubjectPublicKeyInfo',
  encoding: 'hex',
  delimiter: ':'
});

// gets a hex-encoded, colon-delimited MD5 RSAPublicKey public key fingerprint
pki.getPublicKeyFingerprint(key, {
  md: forge.md.md5.create(),
  encoding: 'hex',
  delimiter: ':'
});

// creates a CA store
var caStore = pki.createCaStore([/* PEM-encoded cert */, ...]);

// add a certificate to the CA store
caStore.addCertificate(certObjectOrPemString);

// gets the issuer (its certificate) for the given certificate
var issuerCert = caStore.getIssuer(subjectCert);

// verifies a certificate chain against a CA store
pki.verifyCertificateChain(caStore, chain, customVerifyCallback);

// signs a certificate using the given private key
cert.sign(privateKey);

// signs a certificate using SHA-256 instead of SHA-1
cert.sign(privateKey, forge.md.sha256.create());

// verifies an issued certificate using the issuer certificate's public key
var verified = issuer.verify(issued);

// generate a keypair and create an X.509v3 certificate
var keys = pki.rsa.generateKeyPair(2048);
var cert = pki.createCertificate();
cert.publicKey = keys.publicKey;
// alternatively set public key from a csr
//cert.publicKey = csr.publicKey;
// NOTE: serialNumber is the hex encoded value of an ASN.1 INTEGER.
// Conforming CAs should ensure serialNumber is:
// - no more than 20 octets
// - non-negative (prefix a '00' if your value starts with a '1' bit)
cert.serialNumber = '01';
cert.validity.notBefore = new Date();
cert.validity.notAfter = new Date();
cert.validity.notAfter.setFullYear(cert.validity.notBefore.getFullYear() + 1);
var attrs = [{
  name: 'commonName',
  value: 'example.org'
}, {
  name: 'countryName',
  value: 'US'
}, {
  shortName: 'ST',
  value: 'Virginia'
}, {
  name: 'localityName',
  value: 'Blacksburg'
}, {
  name: 'organizationName',
  value: 'Test'
}, {
  shortName: 'OU',
  value: 'Test'
}];
cert.setSubject(attrs);
// alternatively set subject from a csr
//cert.setSubject(csr.subject.attributes);
cert.setIssuer(attrs);
cert.setExtensions([{
  name: 'basicConstraints',
  cA: true
}, {
  name: 'keyUsage',
  keyCertSign: true,
  digitalSignature: true,
  nonRepudiation: true,
  keyEncipherment: true,
  dataEncipherment: true
}, {
  name: 'extKeyUsage',
  serverAuth: true,
  clientAuth: true,
  codeSigning: true,
  emailProtection: true,
  timeStamping: true
}, {
  name: 'nsCertType',
  client: true,
  server: true,
  email: true,
  objsign: true,
  sslCA: true,
  emailCA: true,
  objCA: true
}, {
  name: 'subjectAltName',
  altNames: [{
    type: 6, // URI
    value: 'http://example.org/webid#me'
  }, {
    type: 7, // IP
    ip: '127.0.0.1'
  }]
}, {
  name: 'subjectKeyIdentifier'
}]);
/* alternatively set extensions from a csr
var extensions = csr.getAttribute({name: 'extensionRequest'}).extensions;
// optionally add more extensions
extensions.push.apply(extensions, [{
  name: 'basicConstraints',
  cA: true
}, {
  name: 'keyUsage',
  keyCertSign: true,
  digitalSignature: true,
  nonRepudiation: true,
  keyEncipherment: true,
  dataEncipherment: true
}]);
cert.setExtensions(extensions);
*/
// self-sign certificate
cert.sign(keys.privateKey);

// convert a Forge certificate to PEM
var pem = pki.certificateToPem(cert);

// convert a Forge certificate from PEM
var cert = pki.certificateFromPem(pem);

// convert an ASN.1 X.509v3 object to a Forge certificate
var cert = pki.certificateFromAsn1(obj);

// convert a Forge certificate to an ASN.1 X.509v3 object
var asn1Cert = pki.certificateToAsn1(cert);

PKCS#5

Provides the password-based key-derivation function from PKCS#5.

Examples

PKCS#7

Provides cryptographically protected messages from PKCS#7.

Examples

// convert a message from PEM
var p7 = forge.pkcs7.messageFromPem(pem);
// look at p7.recipients

// find a recipient by the issuer of a certificate
var recipient = p7.findRecipient(cert);

// decrypt
p7.decrypt(p7.recipients[0], privateKey);

// create a p7 enveloped message
var p7 = forge.pkcs7.createEnvelopedData();

// add a recipient
var cert = forge.pki.certificateFromPem(certPem);
p7.addRecipient(cert);

// set content
p7.content = forge.util.createBuffer('Hello');

// encrypt
p7.encrypt();

// convert message to PEM
var pem = forge.pkcs7.messageToPem(p7);

// create a degenerate PKCS#7 certificate container
// (CRLs not currently supported, only certificates)
var p7 = forge.pkcs7.createSignedData();
p7.addCertificate(certOrCertPem1);
p7.addCertificate(certOrCertPem2);
var pem = forge.pkcs7.messageToPem(p7);

// create PKCS#7 signed data with authenticatedAttributes
// attributes include: PKCS#9 content-type, message-digest, and signing-time
var p7 = forge.pkcs7.createSignedData();
p7.content = forge.util.createBuffer('Some content to be signed.', 'utf8');
p7.addCertificate(certOrCertPem);
p7.addSigner({
  key: privateKeyAssociatedWithCert,
  certificate: certOrCertPem,
  digestAlgorithm: forge.pki.oids.sha256,
  authenticatedAttributes: [{
    type: forge.pki.oids.contentType,
    value: forge.pki.oids.data
  }, {
    type: forge.pki.oids.messageDigest
    // value will be auto-populated at signing time
  }, {
    type: forge.pki.oids.signingTime,
    // value can also be auto-populated at signing time
    value: new Date()
  }]
});
p7.sign();
var pem = forge.pkcs7.messageToPem(p7);

// PKCS#7 Sign in detached mode.
// Includes the signature and certificate without the signed data.
p7.sign({detached: true});

PKCS#8

Examples

var pki = forge.pki;

// convert a PEM-formatted private key to a Forge private key
var privateKey = pki.privateKeyFromPem(pem);

// convert a Forge private key to PEM-format
var pem = pki.privateKeyToPem(privateKey);

// convert an ASN.1 PrivateKeyInfo or RSAPrivateKey to a Forge private key
var privateKey = pki.privateKeyFromAsn1(rsaPrivateKey);

// convert a Forge private key to an ASN.1 RSAPrivateKey
var rsaPrivateKey = pki.privateKeyToAsn1(privateKey);

// wrap an RSAPrivateKey ASN.1 object in a PKCS#8 ASN.1 PrivateKeyInfo
var privateKeyInfo = pki.wrapRsaPrivateKey(rsaPrivateKey);

// convert a PKCS#8 ASN.1 PrivateKeyInfo to PEM
var pem = pki.privateKeyInfoToPem(privateKeyInfo);

// encrypts a PrivateKeyInfo using a custom password and
// outputs an EncryptedPrivateKeyInfo
var encryptedPrivateKeyInfo = pki.encryptPrivateKeyInfo(
  privateKeyInfo, 'myCustomPasswordHere', {
    algorithm: 'aes256', // 'aes128', 'aes192', 'aes256', '3des'
  });

// decrypts an ASN.1 EncryptedPrivateKeyInfo that was encrypted
// with a custom password
var privateKeyInfo = pki.decryptPrivateKeyInfo(
  encryptedPrivateKeyInfo, 'myCustomPasswordHere');

// converts an EncryptedPrivateKeyInfo to PEM
var pem = pki.encryptedPrivateKeyToPem(encryptedPrivateKeyInfo);

// converts a PEM-encoded EncryptedPrivateKeyInfo to ASN.1 format
var encryptedPrivateKeyInfo = pki.encryptedPrivateKeyFromPem(pem);

// wraps and encrypts a Forge private key and outputs it in PEM format
var pem = pki.encryptRsaPrivateKey(privateKey, 'password');

// encrypts a Forge private key and outputs it in PEM format using OpenSSL's
// proprietary legacy format + encapsulated PEM headers (DEK-Info)
var pem = pki.encryptRsaPrivateKey(privateKey, 'password', {legacy: true});

// decrypts a PEM-formatted, encrypted private key
var privateKey = pki.decryptRsaPrivateKey(pem, 'password');

// sets an RSA public key from a private key
var publicKey = pki.setRsaPublicKey(privateKey.n, privateKey.e);

PKCS#10

Provides certification requests or certificate signing requests (CSR) from PKCS#10.

Examples

PKCS#12

Provides the cryptographic archive file format from PKCS#12.

Note for Chrome/Firefox/iOS/similar users: If you have trouble importing a PKCS#12 container, try using the TripleDES algorithm. It can be passed to forge.pkcs12.toPkcs12Asn1 using the {algorithm: '3des'} option.

Examples

// decode p12 from base64
var p12Der = forge.util.decode64(p12b64);
// get p12 as ASN.1 object
var p12Asn1 = forge.asn1.fromDer(p12Der);
// decrypt p12 using the password 'password'
var p12 = forge.pkcs12.pkcs12FromAsn1(p12Asn1, 'password');
// decrypt p12 using non-strict parsing mode (resolves some ASN.1 parse errors)
var p12 = forge.pkcs12.pkcs12FromAsn1(p12Asn1, false, 'password');
// decrypt p12 using literally no password (eg: Mac OS X/apple push)
var p12 = forge.pkcs12.pkcs12FromAsn1(p12Asn1);
// decrypt p12 using an "empty" password (eg: OpenSSL with no password input)
var p12 = forge.pkcs12.pkcs12FromAsn1(p12Asn1, '');
// p12.safeContents is an array of safe contents, each of
// which contains an array of safeBags

// get bags by friendlyName
var bags = p12.getBags({friendlyName: 'test'});
// bags are key'd by attribute type (here "friendlyName")
// and the key values are an array of matching objects
var cert = bags.friendlyName[0];

// get bags by localKeyId
var bags = p12.getBags({localKeyId: buffer});
// bags are key'd by attribute type (here "localKeyId")
// and the key values are an array of matching objects
var cert = bags.localKeyId[0];

// get bags by localKeyId (input in hex)
var bags = p12.getBags({localKeyIdHex: '7b59377ff142d0be4565e9ac3d396c01401cd879'});
// bags are key'd by attribute type (here "localKeyId", *not* "localKeyIdHex")
// and the key values are an array of matching objects
var cert = bags.localKeyId[0];

// get bags by type
var bags = p12.getBags({bagType: forge.pki.oids.certBag});
// bags are key'd by bagType and each bagType key's value
// is an array of matches (in this case, certificate objects)
var cert = bags[forge.pki.oids.certBag][0];

// get bags by friendlyName and filter on bag type
var bags = p12.getBags({
  friendlyName: 'test',
  bagType: forge.pki.oids.certBag
});

// get key bags
var bags = p12.getBags({bagType: forge.pki.oids.keyBag});
// get key
var bag = bags[forge.pki.oids.keyBag][0];
var key = bag.key;
// if the key is in a format unrecognized by forge then
// bag.key will be `null`, use bag.asn1 to get the ASN.1
// representation of the key
if(bag.key === null) {
  var keyAsn1 = bag.asn1;
  // can now convert back to DER/PEM/etc for export
}

// generate a p12 using AES (default)
var p12Asn1 = forge.pkcs12.toPkcs12Asn1(
  privateKey, certificateChain, 'password');

// generate a p12 that can be imported by Chrome/Firefox/iOS
// (requires the use of Triple DES instead of AES)
var p12Asn1 = forge.pkcs12.toPkcs12Asn1(
  privateKey, certificateChain, 'password',
  {algorithm: '3des'});

// base64-encode p12
var p12Der = forge.asn1.toDer(p12Asn1).getBytes();
var p12b64 = forge.util.encode64(p12Der);

// create download link for p12
var a = document.createElement('a');
a.download = 'example.p12';
a.setAttribute('href', 'data:application/x-pkcs12;base64,' + p12b64);
a.appendChild(document.createTextNode('Download'));

ASN.1

Provides ASN.1 DER encoding and decoding.

Examples

var asn1 = forge.asn1;

// create a SubjectPublicKeyInfo
var subjectPublicKeyInfo =
  asn1.create(asn1.Class.UNIVERSAL, asn1.Type.SEQUENCE, true, [
    // AlgorithmIdentifier
    asn1.create(asn1.Class.UNIVERSAL, asn1.Type.SEQUENCE, true, [
      // algorithm
      asn1.create(asn1.Class.UNIVERSAL, asn1.Type.OID, false,
        asn1.oidToDer(pki.oids['rsaEncryption']).getBytes()),
      // parameters (null)
      asn1.create(asn1.Class.UNIVERSAL, asn1.Type.NULL, false, '')
    ]),
    // subjectPublicKey
    asn1.create(asn1.Class.UNIVERSAL, asn1.Type.BITSTRING, false, [
      // RSAPublicKey
      asn1.create(asn1.Class.UNIVERSAL, asn1.Type.SEQUENCE, true, [
        // modulus (n)
        asn1.create(asn1.Class.UNIVERSAL, asn1.Type.INTEGER, false,
          _bnToBytes(key.n)),
        // publicExponent (e)
        asn1.create(asn1.Class.UNIVERSAL, asn1.Type.INTEGER, false,
          _bnToBytes(key.e))
      ])
    ])
  ]);

// serialize an ASN.1 object to DER format
var derBuffer = asn1.toDer(subjectPublicKeyInfo);

// deserialize to an ASN.1 object from a byte buffer filled with DER data
var object = asn1.fromDer(derBuffer);

// convert an OID dot-separated string to a byte buffer
var derOidBuffer = asn1.oidToDer('1.2.840.113549.1.1.5');

// convert a byte buffer with a DER-encoded OID to a dot-separated string
console.log(asn1.derToOid(derOidBuffer));
// output: 1.2.840.113549.1.1.5

// validates that an ASN.1 object matches a particular ASN.1 structure and
// captures data of interest from that structure for easy access
var publicKeyValidator = {
  name: 'SubjectPublicKeyInfo',
  tagClass: asn1.Class.UNIVERSAL,
  type: asn1.Type.SEQUENCE,
  constructed: true,
  captureAsn1: 'subjectPublicKeyInfo',
  value: [{
    name: 'SubjectPublicKeyInfo.AlgorithmIdentifier',
    tagClass: asn1.Class.UNIVERSAL,
    type: asn1.Type.SEQUENCE,
    constructed: true,
    value: [{
      name: 'AlgorithmIdentifier.algorithm',
      tagClass: asn1.Class.UNIVERSAL,
      type: asn1.Type.OID,
      constructed: false,
      capture: 'publicKeyOid'
    }]
  }, {
    // subjectPublicKey
    name: 'SubjectPublicKeyInfo.subjectPublicKey',
    tagClass: asn1.Class.UNIVERSAL,
    type: asn1.Type.BITSTRING,
    constructed: false,
    value: [{
      // RSAPublicKey
      name: 'SubjectPublicKeyInfo.subjectPublicKey.RSAPublicKey',
      tagClass: asn1.Class.UNIVERSAL,
      type: asn1.Type.SEQUENCE,
      constructed: true,
      optional: true,
      captureAsn1: 'rsaPublicKey'
    }]
  }]
};

var capture = {};
var errors = [];
if(!asn1.validate(
  subjectPublicKeyInfo, publicKeyValidator, capture, errors)) {
  throw 'ASN.1 object is not a SubjectPublicKeyInfo.';
}
// capture.subjectPublicKeyInfo contains the full ASN.1 object
// capture.rsaPublicKey contains the full ASN.1 object for the RSA public key
// capture.publicKeyOid only contains the value for the OID
var oid = asn1.derToOid(capture.publicKeyOid);
if(oid !== pki.oids['rsaEncryption']) {
  throw 'Unsupported OID.';
}

// pretty print an ASN.1 object to a string for debugging purposes
asn1.prettyPrint(object);

Message Digests

SHA1

Provides SHA-1 message digests.

Examples

SHA256

Provides SHA-256 message digests.

Examples

SHA384

Provides SHA-384 message digests.

Examples

SHA512

Provides SHA-512 message digests.

Examples

MD5

Provides MD5 message digests.

Examples

HMAC

Provides HMAC w/any supported message digest algorithm.

Examples

Utilities

Prime

Provides an API for generating large, random, probable primes.

Examples

PRNG

Provides a Fortuna-based cryptographically-secure pseudo-random number generator, to be used with a cryptographic function backend, e.g. AES. An implementation using AES as a backend is provided. An API for collecting entropy is given, though if window.crypto.getRandomValues is available, it will be used automatically.

Examples

Tasks

Provides queuing and synchronizing tasks in a web application.

Examples

Utilities

Provides utility functions, including byte buffer support, base64, bytes to/from hex, zlib inflate/deflate, etc.

Examples

Logging

Provides logging to a JavaScript console using various categories and levels of verbosity.

Examples

Debugging

Provides storage of debugging information normally inaccessible in closures for viewing/investigation.

Examples

The flash README provides details on rebuilding the optional Flash component used for networking. It also provides details on Policy Server support.

Security Considerations

When using this code please keep the following in mind:

  • Cryptography is hard. Please review and test this code before depending on it for critical functionality.
  • The nature of JavaScript is that execution of this code depends on trusting a very large set of JavaScript tools and systems. Consider runtime variations, runtime characteristics, runtime optimization, code optimization, code minimization, code obfuscation, bundling tools, possible bugs, the Forge code itself, and so on.
  • If using pre-built bundles from Bower or similar be aware someone else ran the tools to create those files.
  • Use a secure transport channel such as TLS to load scripts and consider using additional security mechanisms such as Subresource Integrity script attributes.
  • Use “native” functionality where possible. This can be critical when dealing with performance and random number generation. Note that the JavaScript random number algorithms should perform well if given suitable entropy.
  • Understand possible attacks against cryptographic systems. For instance side channel and timing attacks may be possible due to the difficulty in implementing constant time algorithms in pure JavaScript.
  • Certain features in this library are less susceptible to attacks depending on usage. This primarily includes features that deal with data format manipulation or those that are not involved in communication.

Library Background

  • https://digitalbazaar.com/2010/07/20/javascript-tls-1/
  • https://digitalbazaar.com/2010/07/20/javascript-tls-2/

Contact

  • Code: https://github.com/digitalbazaar/forge
  • Bugs: https://github.com/digitalbazaar/forge/issues
  • Email: support@digitalbazaar.com
  • IRC: #forgejs on freenode

Donations

Financial support is welcome and contributes to further development:

  • For PayPal please send to paypal@digitalbazaar.com.
  • Something else? Please contact support@digitalbazaar.com.


Enquirer




Stylish CLI prompts that are user-friendly, intuitive and easy to create.
>_ Prompts should be more like conversations than inquisitions▌


(Example shows Enquirer’s Survey Prompt.)
The terminal in all examples is Hyper; the theme is hyper-monokai-extended.

See more prompt examples



Created by jonschlinkert and doowb, Enquirer is fast, easy to use, and lightweight enough for small projects, while also being powerful and customizable enough for the most advanced use cases.

  • Fast - Loads in ~4ms (that’s about 3-4 times faster than a single frame of an HD movie at 60fps)
  • Lightweight - Only one dependency, the excellent ansi-colors by Brian Woodward.
  • Easy to implement - Uses promises and async/await and sensible defaults to make prompts easy to create and implement.
  • Easy to use - Thrill your users with a better experience! Navigating around input and choices is a breeze. You can even create quizzes, or record and playback key bindings to aid with tutorials and videos.
  • Intuitive - Keypress combos are available to simplify usage.
  • Flexible - All prompts can be used standalone or chained together.
  • Stylish - Easily override semantic styles and symbols for any part of the prompt.
  • Extensible - Easily create and use custom prompts by extending Enquirer’s built-in prompts.
  • Pluggable - Add advanced features to Enquirer using plugins.
  • Validation - Optionally validate user input with any prompt.
  • Well tested - All prompts are well-tested, and tests are easy to create without having to use brittle, hacky solutions to spy on prompts or “inject” values.
  • Examples - There are numerous examples available to help you get started.

If you like Enquirer, please consider starring or tweeting about this project to show your support. Thanks!


>_ Ready to start making prompts your users will love? ▌



❯ Getting started

Get started with Enquirer, the most powerful and easy-to-use Node.js library for creating interactive CLI prompts.


❯ Install

Install with npm:

Install with yarn:
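
```shell
# with npm
npm install enquirer --save

# with yarn
yarn add enquirer
```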


(Requires Node.js 8.6 or higher. Please let us know if you need support for an earlier version by creating an issue.)


❯ Usage

Single prompt

The easiest way to get started with enquirer is to pass a question object to the prompt method.

(Examples with await need to be run inside an async function)

Multiple prompts

Pass an array of “question” objects to run a series of prompts.

Different ways to run enquirer

1. By importing the specific built-in prompt

2. By passing the options to prompt

Jump to: Getting Started · Prompts · Options · Key Bindings


❯ Enquirer

Enquirer is a prompt runner

Add Enquirer to your JavaScript project with the following line of code.

The main export of this library is the Enquirer class, which has methods and features designed to simplify running prompts.

Prompts control how values are rendered and returned

Each individual prompt is a class with special features and functionality for rendering the types of values you want to show users in the terminal, and subsequently returning the types of values you need to use in your application.

How can I customize prompts?

Below in this guide you will find information about creating custom prompts. For now, we’ll focus on how to customize an existing prompt.

All of the individual prompt classes in this library are exposed as static properties on Enquirer. This allows them to be used directly without using enquirer.prompt().

Use this approach if you need to modify a prompt instance, or listen for events on the prompt.

Example

Enquirer

Create an instance of Enquirer.

Params

  • options {Object}: (optional) Options to use with all prompts.
  • answers {Object}: (optional) Answers object to initialize with.

Example

register()

Register a custom prompt type.

Params

  • type {String}
  • fn {Function|Prompt}: Prompt class, or a function that returns a Prompt class.
  • returns {Object}: Returns the Enquirer instance

Example

prompt()

Prompt function that takes a “question” object or array of question objects, and returns an object with responses from the user.

Params

  • questions {Array|Object}: Options objects for one or more prompts to run.
  • returns {Promise}: Promise that returns an “answers” object with the user’s responses.

Example

use()

Use an enquirer plugin.

Params

  • plugin {Function}: Plugin function that takes an instance of Enquirer.
  • returns {Object}: Returns the Enquirer instance.

Example

Enquirer#prompt

Prompt function that takes a “question” object or array of question objects, and returns an object with responses from the user.

Params

  • questions {Array|Object}: Options objects for one or more prompts to run.
  • returns {Promise}: Promise that returns an “answers” object with the user’s responses.

Example


❯ Prompts

This section is about Enquirer’s prompts: what they look like, how they work, how to run them, available options, and how to customize the prompts or create your own prompt concept.

Getting started with Enquirer’s prompts

Prompt

The base Prompt class is used to create all other prompts.

See the documentation for creating custom prompts to learn more about how this works.

Prompt Options

Each prompt takes an options object (aka “question” object), that implements the following interface:

Each property of the options object is described below:

Property Required? Type Description
type yes string\|function Enquirer uses this value to determine the type of prompt to run, but it’s optional when prompts are run directly.
name yes string\|function Used as the key for the answer on the returned values (answers) object.
message yes string\|function The message to display when the prompt is rendered in the terminal.
skip no boolean\|function If true, the prompt will be skipped and not asked.
initial no string\|function The default value to return if the user does not supply a value.
format no function Function to format user input in the terminal.
result no function Function to format the final submitted value before it’s returned.
validate no function Function to validate the submitted value before it’s returned. This function may return a boolean or a string. If a string is returned it will be used as the validation error message.

Example usage
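A sketch of a "question" object exercising several of these options (the name, message, and values are illustrative):

```javascript
const { prompt } = require('enquirer');

// Illustrative question object using the options described above.
const question = {
  type: 'input',             // which prompt to run
  name: 'username',          // key for the answer on the returned object
  message: 'What is your username?',
  initial: 'jonschlinkert',  // default value if the user submits nothing
  format(value) {
    return value.toLowerCase(); // format user input in the terminal
  },
  validate(value) {
    // return a string to use it as the validation error message
    return value.length >= 3 ? true : 'Username must be at least 3 characters';
  }
};

prompt(question)
  .then(answers => console.log(answers)) // e.g. { username: 'jonschlinkert' }
  .catch(console.error);
```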


Built-in prompts

AutoComplete Prompt

Prompt that auto-completes as the user types, and returns the selected value as a string.

Enquirer AutoComplete Prompt

Example Usage

AutoComplete Options

Option Type Default Description
highlight function dim version of primary style The color to use when “highlighting” characters in the list that match user input.
multiple boolean false Allow multiple choices to be selected.
suggest function Greedy match, returns true if choice message contains input string. Function that filters choices. Takes user input and a choices array, and returns a list of matching choices.
initial number 0 Preselected item in the list of choices.
footer function None Function that displays footer text.
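As an illustration of the suggest option, here is a minimal filter function (the function name and choice shapes are illustrative; the default performs a greedy substring match on the choice message):

```javascript
// Minimal sketch of a custom `suggest` function for the AutoComplete prompt.
// Takes the user's input and the array of choices, returns matching choices.
function suggest(input, choices) {
  const needle = input.toLowerCase();
  return choices.filter(choice => choice.message.toLowerCase().includes(needle));
}

const choices = [
  { message: 'Almond' },
  { message: 'Apple' },
  { message: 'Banana' }
];

console.log(suggest('ap', choices).map(c => c.message)); // => [ 'Apple' ]
```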

Related prompts

↑ back to: Getting Started · Prompts


BasicAuth Prompt

Prompt that asks for a username and password to authenticate the user. By default, the authenticate function compares the entered username and password with the values supplied when running the prompt. Implementers are expected to override authenticate with custom logic, such as making an API request to a server to authenticate the entered username and password and receive a token back.

Enquirer BasicAuth Prompt

Example Usage
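A sketch using the default authenticate behavior (the credentials below are placeholders; username, password, and showPassword are assumed options of this prompt):

```javascript
const { prompt } = require('enquirer');

// Run the basicauth prompt with the default authenticate behavior, which
// compares the user's input against the supplied username/password.
prompt({
  type: 'basicauth',
  name: 'authenticated',
  message: 'Please enter your username and password',
  username: 'rajat-sr',
  password: '123',
  showPassword: true
})
  .then(answers => console.log(answers)) // { authenticated: true } on success
  .catch(console.error);
```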

↑ back to: Getting Started · Prompts


Confirm Prompt

Prompt that returns true or false.

Enquirer Confirm Prompt

Example Usage

Related prompts

↑ back to: Getting Started · Prompts


Form Prompt

Prompt that allows the user to enter and submit multiple values on a single terminal screen.

Enquirer Form Prompt

Example Usage

Related prompts

↑ back to: Getting Started · Prompts


Input Prompt

Prompt that takes user input and returns a string.

Enquirer Input Prompt

Example Usage

You can use data-store to store input history that the user can cycle through (see source).

Related prompts

↑ back to: Getting Started · Prompts


Invisible Prompt

Prompt that takes user input, hides it from the terminal, and returns a string.

Enquirer Invisible Prompt

Example Usage

Related prompts

↑ back to: Getting Started · Prompts


List Prompt

Prompt that returns a list of values, created by splitting the user input. The default separator is a comma (",") with optional trailing whitespace.
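The splitting behavior is roughly equivalent to this sketch:

```javascript
// Roughly how the List prompt turns typed input into an array:
// split on a comma followed by optional whitespace.
function splitList(input) {
  return input.split(/,\s*/);
}

console.log(splitList('red, green,  blue')); // => [ 'red', 'green', 'blue' ]
```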

Enquirer List Prompt

Example Usage

Related prompts

↑ back to: Getting Started · Prompts


MultiSelect Prompt

Prompt that allows the user to select multiple items from a list of options.

Enquirer MultiSelect Prompt

Example Usage

Example key-value pairs

Optionally, pass a result function and use the .map method to return an object of key-value pairs of the selected names and values: example
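A sketch of that pattern (the choice names and values are illustrative):

```javascript
const { prompt } = require('enquirer');

prompt({
  type: 'multiselect',
  name: 'colors',
  message: 'Pick your favorite colors',
  choices: [
    { name: 'red', value: '#ff0000' },
    { name: 'green', value: '#00ff00' },
    { name: 'blue', value: '#0000ff' }
  ],
  result(names) {
    // `this.map` returns an object of name/value pairs for the selection
    return this.map(names);
  }
})
  .then(answers => console.log(answers)) // e.g. { colors: { red: '#ff0000' } }
  .catch(console.error);
```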

Related prompts

↑ back to: Getting Started · Prompts


Numeral Prompt

Prompt that takes a number as input.

Enquirer Numeral Prompt

Example Usage

Related prompts

↑ back to: Getting Started · Prompts


Password Prompt

Prompt that takes user input and masks it in the terminal. Also see the Invisible prompt.

Enquirer Password Prompt

Example Usage

Related prompts

↑ back to: Getting Started · Prompts


Quiz Prompt

Prompt that allows the user to play multiple-choice quiz questions.

Enquirer Quiz Prompt

Example Usage

Quiz Options

Option Type Required Description
choices array Yes The list of possible answers to the quiz question.
correctChoice number Yes Index of the correct choice from the choices array.

↑ back to: Getting Started · Prompts


Survey Prompt

Prompt that allows the user to provide feedback for a list of questions.

Enquirer Survey Prompt

Example Usage

Related prompts


Scale Prompt

A more compact version of the Survey prompt, the Scale prompt allows the user to quickly provide feedback using a Likert Scale.

Enquirer Scale Prompt

Example Usage

Related prompts

↑ back to: Getting Started · Prompts


Select Prompt

Prompt that allows the user to select from a list of options.

Enquirer Select Prompt

Example Usage

Related prompts

↑ back to: Getting Started · Prompts


Sort Prompt

Prompt that allows the user to sort items in a list.

Example

In this example, custom styling is applied to the returned values to make it easier to see what’s happening.

Enquirer Sort Prompt

Example Usage

Related prompts

↑ back to: Getting Started · Prompts


Snippet Prompt

Prompt that allows the user to replace placeholders in a snippet of code or text.

Enquirer Snippet Prompt

Example Usage

Related prompts

↑ back to: Getting Started · Prompts


Toggle Prompt

Prompt that allows the user to toggle between two values then returns true or false.

Enquirer Toggle Prompt

Example Usage

Related prompts

↑ back to: Getting Started · Prompts


Prompt Types

There are 5 (soon to be 6!) type classes:

  • ArrayPrompt
  • AuthPrompt
  • BooleanPrompt
  • NumberPrompt
  • StringPrompt

Each type is a low-level class that may be used as a starting point for creating higher level prompts. Continue reading to learn how.

ArrayPrompt

The ArrayPrompt class is used for creating prompts that display a list of choices in the terminal. For example, Enquirer uses this class as the basis for the Select and Survey prompts.

Options

In addition to the options available to all prompts, Array prompts also support the following options.

Option Required? Type Description
autofocus no string\|number The index or name of the choice that should have focus when the prompt loads. Only one choice may have focus at a time.
stdin no stream The input stream to use for emitting keypress events. Defaults to process.stdin.
stdout no stream The output stream to use for writing the prompt to the terminal. Defaults to process.stdout.

Properties

Array prompts have the following instance properties and getters.

Property name Type Description
choices array Array of choices that have been normalized from choices passed on the prompt options.
cursor number Position of the cursor relative to the user input (string).
enabled array Returns an array of enabled choices.
focused object Returns the currently focused choice in the visible list of choices, equivalent to prompt.choices[prompt.index]. This is similar to the concept of focus in HTML and CSS. Focused choices are always visible (on-screen). When a list of choices is longer than the list of visible choices, and an off-screen choice is focused, the list will scroll to the focused choice and re-render.
index number Position of the pointer in the visible list (array) of choices.
limit number The number of choices to display on-screen.
selected array Either a list of enabled choices (when options.multiple is true) or the currently focused choice.
visible array The list of choices currently visible on-screen.

Methods

Method Description
pointer() Returns the visual symbol used to identify the choice that currently has focus. The ❯ symbol is often used for this. The pointer is not always visible, as with the autocomplete prompt.
indicator() Returns the visual symbol that indicates whether or not a choice is checked/enabled.
focus() Sets focus on a choice, if it can be focused.

Choices

Array prompts support the choices option, which is the array of choices users will be able to select from when rendered in the terminal.

Type: string|object

Example

Defining choices

Whether defined as a string or object, choices are normalized to the following interface:

Example

Normalizes to the following when the prompt is run:
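A simplified sketch of this normalization (the real normalized choice carries additional properties):

```javascript
// Simplified sketch of choice normalization: a string choice becomes an
// object whose name, message and value all default to the string.
function normalizeChoice(choice) {
  if (typeof choice === 'string') choice = { name: choice };
  return {
    name: choice.name,
    message: choice.message !== undefined ? choice.message : choice.name,
    value: choice.value !== undefined ? choice.value : choice.name,
    enabled: choice.enabled === true
  };
}

console.log(normalizeChoice('apple'));
// => { name: 'apple', message: 'apple', value: 'apple', enabled: false }
```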

Choice properties

The following properties are supported on choice objects.

Option Type Description
name string The unique key to identify a choice
message string The message to display in the terminal. name is used when this is undefined.
value string Value to associate with the choice. Useful for creating key-value pairs from user choices. name is used when this is undefined.
choices array Array of “child” choices.
hint string Help message to display next to a choice.
role string Determines how the choice will be displayed. Currently the only role supported is separator. Additional roles may be added in the future (like heading, etc). Please create a feature request if you would like to see additional roles supported.
enabled boolean Enable a choice by default. This is only supported when options.multiple is true or on prompts that support multiple choices, like MultiSelect.
disabled boolean\|string Disable a choice so that it cannot be selected. This value may either be true, false, or a message to display.
indicator string\|function Custom indicator to render for a choice (like a check or radio button).

AuthPrompt

The AuthPrompt is used to create prompts that log in a user using any authentication method. For example, Enquirer uses this class as the basis for the BasicAuth Prompt. You can also find prompt examples in the examples/auth/ folder that use AuthPrompt to create an OAuth-based authentication prompt, a prompt that authenticates using a time-based OTP, and others.

AuthPrompt has a factory function that creates a subclass of the AuthPrompt class. It expects an authenticate function as an argument, which overrides the authenticate function of the AuthPrompt class.

Methods

Method Description
authenticate() Contains all the authentication logic. This function should be overridden to implement custom authentication logic. The default authenticate function throws an error if no other function is provided.

Choices

The Auth prompt supports the choices option, which is similar to the choices used in the Form Prompt.

Example


BooleanPrompt

The BooleanPrompt class is used for creating prompts that display and return a boolean value.

Returns: boolean


NumberPrompt

The NumberPrompt class is used for creating prompts that display and return a numerical value.

Returns: string|number (number, or number formatted as a string)


StringPrompt

The StringPrompt class is used for creating prompts that display and return a string value.

Returns: string


❯ Custom prompts

With Enquirer 2.0, custom prompts are easier than ever to create and use.

How do I create a custom prompt?

Custom prompts are created by extending either:

  • Enquirer’s Prompt class
  • one of the built-in prompts, or
  • low-level types.

If you want to be able to specify your prompt by type so that it may be used alongside other prompts, you will need to first create an instance of Enquirer.

Then use the .register() method to add your custom prompt.

Now you can do the following when defining “questions”.
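Putting those steps together, here is a sketch with an illustrative counter prompt:

```javascript
const Enquirer = require('enquirer');
const { Prompt } = Enquirer;

// An illustrative custom prompt: a number adjusted with the up/down arrows.
class Counter extends Prompt {
  constructor(options = {}) {
    super(options);
    this.value = options.initial || 0;
    this.cursorHide();
  }
  up() {
    this.value++;
    this.render();
  }
  down() {
    this.value--;
    this.render();
  }
  render() {
    this.clear();
    this.write(`${this.state.message}: ${this.value}`);
  }
}

const enquirer = new Enquirer();
enquirer.register('counter', Counter);

// The custom prompt can now be specified by type alongside other prompts.
enquirer
  .prompt({ type: 'counter', name: 'count', message: 'Pick a number' })
  .then(answers => console.log(answers))
  .catch(console.error);
```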


❯ Key Bindings

All prompts

These key combinations may be used with all prompts.

command description
ctrl + c Cancel the prompt.
ctrl + g Reset the prompt to its initial state.


Move cursor

These combinations may be used on prompts that support user input (eg. input prompt, password prompt, and invisible prompt).

command description
left Move the cursor back one character.
right Move the cursor forward one character.
ctrl + a Move cursor to the start of the line
ctrl + e Move cursor to the end of the line
ctrl + b Move cursor back one character
ctrl + f Move cursor forward one character
ctrl + x Toggle between first and cursor position


Edit Input

These key combinations may be used on prompts that support user input (eg. input prompt, password prompt, and invisible prompt).


command (Mac) command (Windows) description
delete backspace Delete one character to the left.
fn + delete delete Delete one character to the right.
option + up alt + up Scroll to the previous item in history (Input prompt only, when history is enabled).
option + down alt + down Scroll to the next item in history (Input prompt only, when history is enabled).

Select choices

These key combinations may be used on prompts that support multiple choices, such as the multiselect prompt, or the select prompt when the multiple option is true.

command description
space Toggle the currently selected choice when options.multiple is true.
number Move the pointer to the choice at the given index. Also toggles the selected choice when options.multiple is true.
a Toggle all choices to be enabled or disabled.
i Invert the current selection of choices.
g Toggle the current choice group.


Hide/show choices

command description
fn + up Decrease the number of visible choices by one.
fn + down Increase the number of visible choices by one.


Move/lock Pointer

command description
number Move the pointer to the choice at the given index. Also toggles the selected choice when options.multiple is true.
up Move the pointer up.
down Move the pointer down.
ctrl + a Move the pointer to the first visible choice.
ctrl + e Move the pointer to the last visible choice.
shift + up Scroll up one choice without changing pointer position (locks the pointer while scrolling).
shift + down Scroll down one choice without changing pointer position (locks the pointer while scrolling).


command (Mac) command (Windows) description
fn + left home Move the pointer to the first choice in the choices array.
fn + right end Move the pointer to the last choice in the choices array.


❯ Release History

Please see CHANGELOG.md.

❯ Performance

System specs

MacBook Pro, Intel Core i7, 2.5 GHz, 16 GB.

Load time

Time it takes for the module to load the first time (average of 3 runs):

enquirer: 4.013ms
inquirer: 286.717ms


❯ About

Contributing

Pull requests and stars are always welcome. For bugs and feature requests, please create an issue.

Todo

We’re currently working on documentation for the following items. Please star and watch the repository for updates!

  • Customizing symbols
  • Customizing styles (palette)
  • Customizing rendered input
  • Customizing returned values
  • Customizing key bindings
  • Question validation
  • Choice validation
  • Skipping questions
  • Async choices
  • Async timers: loaders, spinners and other animations
  • Links to examples

Running Tests

Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command:

Building docs

(This project’s readme.md is generated by verb, please don’t edit the readme directly. Any changes to the readme must be made in the .verb.md readme template.)

To generate the readme, run the following command:

Commits Contributor
283 jonschlinkert
82 doowb
32 rajat-sr
20 318097
15 g-plane
12 pixelass
5 adityavyas611
5 satotake
3 tunnckoCore
3 Ovyerus
3 sw-yx
2 DanielRuf
2 GabeL7r
1 AlCalzone
1 hipstersmoothie
1 danieldelcore
1 ImgBotApp
1 jsonkao
1 knpwrs
1 yeskunall
1 mischah
1 renarsvilnis
1 sbugert
1 stephencweiss
1 skellock
1 whxaxes

Author

Jon Schlinkert

Credit

Thanks to derhuerst, creator of prompt libraries such as prompt-skeleton, which influenced some of the concepts we used in our prompts.



Ajv: Another JSON Schema Validator

Build Status npm npm (beta) npm downloads Coverage Status Gitter GitHub Sponsors

Ajv v7 beta is released

Ajv version 7.0.0-beta.0 is released with these changes:

  • to reduce the mistakes in JSON schemas and unexpected validation results, strict mode is added - it prohibits ignored or ambiguous JSON Schema elements.
  • to make code injection from untrusted schemas impossible, code generation is fully re-written to be safe.
  • to simplify Ajv extensions, the new keyword API that is used by pre-defined keywords is available to user-defined keywords - it is much easier to define any keywords now, especially with subschemas.
  • schemas are compiled to ES6 code (ES5 code generation is supported with an option).
  • to improve reliability and maintainability the code is migrated to TypeScript.

Please note:

  • the support for JSON-Schema draft-04 is removed - if you have schemas using “id” attributes you have to replace them with “$id” (or continue using version 6 that will be supported until 02/28/2021).
  • all formats are separated into the ajv-formats package - they have to be explicitly added if you use them.

See release notes for the details.

To install the new version:

See Getting started with v7 for code example.

Mozilla MOSS grant and OpenJS Foundation

   

Ajv also joined OpenJS Foundation – having this support will help ensure the longevity and stability of Ajv for all its users.

This blog post has more details.

I am looking for the long term maintainers of Ajv – working with ReadySet, also sponsored by Mozilla, to establish clear guidelines for the role of a “maintainer” and the contribution standards, and to encourage a wider, more inclusive, contribution from the community.

Please sponsor Ajv development

Since I asked for support of Ajv development, 40 people and 6 organizations have contributed via GitHub and OpenCollective - this support helped us receive the MOSS grant!

Your continuing support is very important - the funds will be used to develop and maintain Ajv once the next major version is released.

Please sponsor Ajv via:

  • GitHub sponsors page (GitHub will match it)
  • Ajv Open Collective

Thank you.

Open Collective sponsors

Using version 6

JSON Schema draft-07 is published.

Ajv version 6.0.0 that supports draft-07 is released. It may require either migrating your schemas or updating your code (to continue using draft-04 and v5 schemas, draft-06 schemas will be supported without changes).

Please note: To use Ajv with draft-06 schemas you need to explicitly add the meta-schema to the validator instance:

To use Ajv with draft-04 schemas in addition to explicitly adding meta-schema you also need to use option schemaId:
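Both cases as code (the meta-schema files ship with the ajv package):

```javascript
var Ajv = require('ajv');

// draft-06 schemas: explicitly add the draft-06 meta-schema.
var ajv = new Ajv();
ajv.addMetaSchema(require('ajv/lib/refs/json-schema-draft-06.json'));

// draft-04 schemas: also set the schemaId option.
var ajv4 = new Ajv({ schemaId: 'id' });
// If you want to use both draft-04 and draft-06/07 schemas:
// var ajv4 = new Ajv({ schemaId: 'auto' });
ajv4.addMetaSchema(require('ajv/lib/refs/json-schema-draft-04.json'));
```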

Contents

Performance

Ajv generates code using doT templates to turn JSON Schemas into super-fast validation functions that are efficient for v8 optimization.

Currently Ajv is the fastest and the most standard compliant validator according to these benchmarks:

Performance of different validators by json-schema-benchmark:

performance

Features

Install

npm install ajv

Getting started

Try it in the Node.js REPL: https://tonicdev.com/npm/ajv

The fastest validation call:
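A sketch of the compile-then-validate pattern (the schema and data are illustrative):

```javascript
var Ajv = require('ajv');
var ajv = new Ajv(); // options can be passed, e.g. { allErrors: true }

var schema = {
  properties: {
    foo: { type: 'string' },
    bar: { type: 'number', maximum: 3 }
  }
};

// compile once, then call the returned function for each piece of data
var validate = ajv.compile(schema);
var valid = validate({ foo: 'abc', bar: 2 });
if (!valid) console.log(validate.errors);
```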

or with less code

or

See API and Options for more details.

Ajv compiles schemas to functions and caches them in all cases (using schema serialized with fast-json-stable-stringify or a custom function as a key), so that the next time the same schema is used (not necessarily the same object instance) it won’t be compiled again.

The best performance is achieved when using compiled functions returned by compile or getSchema methods (there is no additional function call).

Please note: every time a validation function or ajv.validate is called, the errors property is overwritten. You need to copy the errors array reference to another variable if you want to use it later (e.g., in a callback). See Validation errors

Note for TypeScript users: ajv provides its own TypeScript declarations out of the box, so you don’t need to install the deprecated @types/ajv module.

Using in browser

You can require Ajv directly from the code you browserify - in this case Ajv will be a part of your bundle.

If you need to use Ajv in several bundles you can create a separate UMD bundle using npm run bundle script (thanks to siddo420).

Then you need to load Ajv in the browser:

This bundle can be used with different module systems; it creates global Ajv if no module system is found.

The browser bundle is available on cdnjs.

Ajv is tested with these browsers:

Sauce Test Status

Please note: some frameworks, e.g. Dojo, may redefine global require in such way that is not compatible with CommonJS module format. In such case Ajv bundle has to be loaded before the framework and then you can use global Ajv (see issue #234).

Ajv and Content Security Policies (CSP)

If you’re using Ajv to compile a schema (the typical use) in a browser document that is loaded with a Content Security Policy (CSP), that policy will require a script-src directive that includes the value 'unsafe-eval'. :warning: NOTE, however, that unsafe-eval is NOT recommended in a secure CSP[1], as it has the potential to open the document to cross-site scripting (XSS) attacks.

In order to make use of Ajv without relaxing your CSP, you can pre-compile a schema using the CLI. This will transpile the schema JSON into a JavaScript file that exports a validate function that works similarly to a schema compiled at runtime.

Note that pre-compilation of schemas is performed using ajv-pack and there are some limitations to the schema features it can compile. A successfully pre-compiled schema is equivalent to the same schema compiled at runtime.

Command line interface

CLI is available as a separate npm package ajv-cli. It supports:

  • compiling JSON Schemas to test their validity
  • BETA: generating standalone module exporting a validation function to be used without Ajv (using ajv-pack)
  • migrating schemas to draft-07 (using json-schema-migrate)
  • validating data file(s) against JSON Schema
  • testing expected validity of data against JSON Schema
  • referenced schemas
  • custom meta-schemas
  • files in JSON, JSON5, YAML, and JavaScript format
  • all Ajv options
  • reporting changes in data after validation in JSON-patch format

Validation keywords

Ajv supports all validation keywords from draft-07 of JSON Schema standard:

With ajv-keywords package Ajv also supports validation keywords from JSON Schema extension proposals for JSON Schema standard:

See JSON Schema validation keywords for more details.

Annotation keywords

JSON Schema specification defines several annotation keywords that describe the schema itself but do not perform any validation.

  • title and description: information about the data represented by that schema
  • $comment (NEW in draft-07): information for developers. With option $comment Ajv logs or passes the comment string to the user-supplied function. See Options.
  • default: a default value of the data instance, see Assigning defaults.
  • examples (NEW in draft-06): an array of data instances. Ajv does not check the validity of these instances against the schema.
  • readOnly and writeOnly (NEW in draft-07): marks data-instance as read-only or write-only in relation to the source of the data (database, api, etc.).
  • contentEncoding: RFC 2045, e.g., “base64”.
  • contentMediaType: RFC 2046, e.g., “image/png”.

Please note: Ajv does not implement validation of the keywords examples, contentEncoding and contentMediaType but it reserves them. If you want to create a plugin that implements some of them, it should remove these keywords from the instance.

Formats

Ajv implements formats defined by JSON Schema specification and several other formats. It is recommended NOT to use “format” keyword implementations with untrusted data, as they use potentially unsafe regular expressions - see ReDoS attack.

Please note: if you need to use “format” keyword to validate untrusted data, you MUST assess their suitability and safety for your validation scenarios.

The following formats are implemented for string validation with “format” keyword:

  • date: full-date according to RFC3339.
  • time: time with optional time-zone.
  • date-time: date-time from the same source (time-zone is mandatory). date, time and date-time validate ranges in full mode and only regexp in fast mode (see options).
  • uri: full URI.
  • uri-reference: URI reference, including full and relative URIs.
  • uri-template: URI template according to RFC6570
  • url (deprecated): URL record.
  • email: email address.
  • hostname: host name according to RFC1034.
  • ipv4: IP address v4.
  • ipv6: IP address v6.
  • regex: tests whether a string is a valid regular expression by passing it to RegExp constructor.
  • uuid: Universally Unique IDentifier according to RFC4122.
  • json-pointer: JSON-pointer according to RFC6901.
  • relative-json-pointer: relative JSON-pointer according to this draft.

Please note: JSON Schema draft-07 also defines formats iri, iri-reference, idn-hostname and idn-email for URLs, hostnames and emails with international characters. Ajv does not implement these formats. If you create Ajv plugin that implements them please make a PR to mention this plugin here.

There are two modes of format validation: fast and full. This mode affects formats date, time, date-time, uri, uri-reference, and email. See Options for details.

You can add additional formats and replace any of the formats above using addFormat method.

The option unknownFormats allows changing the default behaviour when an unknown format is encountered. In this case Ajv can either fail schema compilation (default) or ignore it (default in versions before 5.0.0). You also can allow specific format(s) that will be ignored. See Options for details.

You can find regular expressions used for format validation and the sources that were used in formats.js.

Combining schemas with $ref

You can structure your validation logic across multiple schema files and have schemas reference each other using $ref keyword.

Example:

Now to compile your schema you can either pass all schemas to Ajv instance:

or use addSchema method:

See Options and addSchema method.

Please note:

  • $ref is resolved as the uri-reference using schema $id as the base URI (see the example).
  • References can be recursive (and mutually recursive) to implement the schemas for different data structures (such as linked lists, trees, graphs, etc.).
  • You don’t have to host your schema files at the URIs that you use as schema $id. These URIs are only used to identify the schemas, and according to JSON Schema specification validators should not expect to be able to download the schemas from these URIs.
  • The actual location of the schema file in the file system is not used.
  • You can pass the identifier of the schema as the second parameter of addSchema method or as a property name in schemas option. This identifier can be used instead of (or in addition to) schema $id.
  • You cannot have the same $id (or the schema identifier) used for more than one schema - the exception will be thrown.
  • You can implement dynamic resolution of the referenced schemas using compileAsync method. In this way you can store schemas in any system (files, web, database, etc.) and reference them without explicitly adding to Ajv instance. See Asynchronous schema compilation.

$data reference

With $data option you can use values from the validated data as the values for the schema keywords. See proposal for more information about how it works.

$data reference is supported in the keywords: const, enum, format, maximum/minimum, exclusiveMaximum / exclusiveMinimum, maxLength / minLength, maxItems / minItems, maxProperties / minProperties, formatMaximum / formatMinimum, formatExclusiveMaximum / formatExclusiveMinimum, multipleOf, pattern, required, uniqueItems.

The value of “$data” should be a JSON-pointer to the data (the root is always the top level data object, even if the $data reference is inside a referenced subschema) or a relative JSON-pointer (it is relative to the current point in data; if the $data reference is inside a referenced subschema it cannot point to the data outside of the root level for this subschema).

Examples.

This schema requires that the value of the property smaller is less than or equal to the value of the property larger:
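A sketch of this schema using a relative JSON-pointer in $data (requires the $data option):

```javascript
var Ajv = require('ajv');
var ajv = new Ajv({ $data: true });

var schema = {
  properties: {
    smaller: {
      type: 'number',
      // "1/larger" points at the sibling property `larger`
      maximum: { $data: '1/larger' }
    },
    larger: { type: 'number' }
  }
};

console.log(ajv.validate(schema, { smaller: 5, larger: 7 })); // true
console.log(ajv.validate(schema, { smaller: 7, larger: 5 })); // false
```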

This schema requires that the properties have the same format as their field names:

$data reference is resolved safely - it won’t throw even if some property is undefined. If $data resolves to undefined the validation succeeds (with the exclusion of const keyword). If $data resolves to incorrect type (e.g. not “number” for maximum keyword) the validation fails.

$merge and $patch keywords

With the package ajv-merge-patch you can use the keywords $merge and $patch that allow extending JSON Schemas with patches using formats JSON Merge Patch (RFC 7396) and JSON Patch (RFC 6902).

To add keywords $merge and $patch to Ajv instance use this code:
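The registration is a single call applying the plugin to the instance:

```javascript
var Ajv = require('ajv');
var ajv = new Ajv();

// adds the $merge and $patch keywords to this Ajv instance
require('ajv-merge-patch')(ajv);
```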

Examples.

Using $merge:

Using $patch:

The schemas above are equivalent to this schema:

The properties source and with in the keywords $merge and $patch can use absolute or relative $ref to point to other schemas previously added to the Ajv instance or to the fragments of the current schema.

See the package ajv-merge-patch for more information.

Defining custom keywords

The advantages of using custom keywords are:

  • allow creating validation scenarios that cannot be expressed using JSON Schema
  • simplify your schemas
  • help bringing a bigger part of the validation logic to your schemas
  • make your schemas more expressive, less verbose and closer to your application domain
  • implement custom data processors that modify your data (modifying option MUST be used in keyword definition) and/or create side effects while the data is being validated

If a keyword is used only for side-effects and its validation result is pre-defined, use option valid: true/false in keyword definition to simplify both generated code (no error handling in case of valid: true) and your keyword functions (no need to return any validation result).

The concerns you have to be aware of when extending JSON Schema standard with custom keywords are the portability and understanding of your schemas. You will have to support these custom keywords on other platforms and to properly document these keywords so that everybody can understand them in your schemas.

You can define custom keywords with addKeyword method. Keywords are defined on the ajv instance level - new instances will not have previously defined keywords.

Ajv allows defining keywords with:

  • validation function
  • compilation function
  • macro function
  • inline compilation function that should return code (as string) that will be inlined in the currently compiled schema

Example. range and exclusiveRange keywords using compiled schema:
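A sketch of those keywords, following the compiled-keyword pattern from the Ajv docs:

```javascript
var Ajv = require('ajv');
var ajv = new Ajv();

// `range` receives [min, max]; the sibling `exclusiveRange` keyword
// switches the comparison to strict inequality.
ajv.addKeyword('range', {
  type: 'number',
  compile: function (sch, parentSchema) {
    var min = sch[0];
    var max = sch[1];
    return parentSchema.exclusiveRange === true
      ? function (data) { return data > min && data < max; }
      : function (data) { return data >= min && data <= max; };
  }
});

var schema = { range: [2, 4], exclusiveRange: true };
var validate = ajv.compile(schema);
console.log(validate(2.01)); // true
console.log(validate(3.99)); // true
console.log(validate(2));    // false
console.log(validate(4));    // false
```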

Several custom keywords (typeof, instanceof, range and propertyNames) are defined in ajv-keywords package - they can be used for your schemas and as a starting point for your own custom keywords.

See Defining custom keywords for more details.

Asynchronous schema compilation

During asynchronous compilation remote references are loaded using supplied function. See compileAsync method and loadSchema option.

Example:

Please note: Option missingRefs should NOT be set to "ignore" or "fail" for asynchronous compilation to work.

Asynchronous validation

Example in Node.js REPL: https://tonicdev.com/esp/ajv-asynchronous-validation

You can define custom formats and keywords that perform validation asynchronously by accessing database or some other service. You should add async: true in the keyword or format definition (see addFormat, addKeyword and Defining custom keywords).

If your schema uses asynchronous formats/keywords or refers to some schema that contains them it should have "$async": true keyword so that Ajv can compile it correctly. If asynchronous format/keyword or reference to asynchronous schema is used in the schema without $async keyword Ajv will throw an exception during schema compilation.

Please note: all asynchronous subschemas that are referenced from the current or other schemas should have "$async": true keyword as well, otherwise the schema compilation will fail.

Validation function for an asynchronous custom format/keyword should return a promise that resolves with true or false (or rejects with new Ajv.ValidationError(errors) if you want to return custom errors from the keyword function).

Ajv compiles asynchronous schemas to es7 async functions that can optionally be transpiled with nodent. Async functions are supported in Node.js 7+ and all modern browsers. You can also supply any other transpiler as a function via processCode option. See Options.

The compiled validation function has $async: true property (if the schema is asynchronous), so you can differentiate these functions if you are using both synchronous and asynchronous schemas.

Validation result will be a promise that resolves with validated data or rejects with an exception Ajv.ValidationError that contains the array of validation errors in errors property.

Example:

Using transpilers with asynchronous validation functions.

ajv-async uses nodent to transpile async functions. To use another transpiler you should separately install it (or load its bundle in the browser).

Using nodent

Using other transpilers

See Options.

Security considerations

JSON Schema, if properly used, can replace data sanitisation. It doesn’t replace other API security considerations. It also introduces additional security aspects to consider.

Security contact

To report a security vulnerability, please use the Tidelift security contact. Tidelift will coordinate the fix and disclosure. Please do NOT report security vulnerabilities via GitHub issues.

Untrusted schemas

Ajv treats JSON schemas as trusted as your application code. This security model is based on the most common use case, when the schemas are static and bundled together with the application.

If your schemas are received from untrusted sources (or generated from untrusted data) there are several scenarios you need to prevent:

  • compiling schemas can cause stack overflow (if they are too deep)
  • compiling schemas can be slow (e.g. #557)
  • validating certain data can be slow

It is difficult to predict all the scenarios, but at the very least it may help to limit the size of untrusted schemas (e.g. limit JSON string length) and also the maximum schema object depth (that can be high for relatively small JSON strings). You also may want to mitigate slow regular expressions in pattern and patternProperties keywords.

Regardless of the measures you take, using untrusted schemas increases security risks.

Circular references in JavaScript objects

Ajv does not support schemas and validated data that have circular references in objects. See issue #802.

An attempt to compile such schemas or validate such data would cause a stack overflow (or would never complete, in the case of asynchronous validation). Depending on the parser you use, untrusted data can lead to circular references.

Security risks of trusted schemas

Some keywords in JSON Schemas can lead to very slow validation for certain data. These keywords include (but may not be limited to):

  • pattern and format for large strings - in some cases using maxLength can help mitigate it, but certain regular expressions can lead to exponential validation time even with relatively short strings (see ReDoS attack).
  • patternProperties for large property names - use propertyNames to mitigate, but some regular expressions can have exponential evaluation time as well.
  • uniqueItems for large non-scalar arrays - use maxItems to mitigate

Please note: The suggestions above to prevent slow validation would only work if you do NOT use allErrors: true in production code (with it, validation continues after errors are found).

You can validate your JSON schemas against this meta-schema to check that these recommendations are followed:

Please note: following all these recommendations is not a guarantee that validation of untrusted data is safe - it can still lead to some undesirable results.

Content Security Policies (CSP)

See Ajv and Content Security Policies (CSP)

ReDoS attack

Certain regular expressions can lead to exponential evaluation time even with relatively short strings.

Please assess the regular expressions you use in the schemas on their vulnerability to this attack - see safe-regex, for example.

Please note: some formats that Ajv implements use regular expressions that can be vulnerable to ReDoS attack, so if you use Ajv to validate data from untrusted sources it is strongly recommended to consider the following:

  • making assessment of “format” implementations in Ajv.
  • using format: 'fast' option that simplifies some of the regular expressions (although it does not guarantee that they are safe).
  • replacing format implementations provided by Ajv with your own implementations of “format” keyword that either uses different regular expressions or another approach to format validation. Please see addFormat method.
  • disabling format validation by ignoring “format” keyword with option format: false

Whatever mitigation you choose, treat all formats provided by Ajv as potentially unsafe and make your own assessment of their suitability for your validation scenarios.

Filtering data

With option removeAdditional (added by andyscott) you can filter data during the validation.

This option modifies original data.

Example:

If removeAdditional option in the example above were "all" then both additional1 and additional2 properties would have been removed.

If the option were "failing" then property additional1 would have been removed regardless of its value and property additional2 would have been removed only if its value were failing the schema in the inner additionalProperties (so in the example above it would have stayed because it passes the schema, but any non-number would have been removed).

Please note: If you use removeAdditional option with additionalProperties keyword inside anyOf/oneOf keywords your validation can fail with this schema, for example:
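For example, a schema of this shape (reconstructed to match the description that follows):

```json
{
  "type": "object",
  "oneOf": [
    {
      "properties": { "foo": { "type": "string" } },
      "required": ["foo"],
      "additionalProperties": false
    },
    {
      "properties": { "bar": { "type": "integer" } },
      "required": ["bar"],
      "additionalProperties": false
    }
  ]
}
```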

The intention of the schema above is to allow objects with either the string property “foo” or the integer property “bar”, but not with both and not with any other properties.

With the option removeAdditional: true the validation will pass for the object {"foo": "abc"} but will fail for the object {"bar": 1}. This happens because while the first subschema in oneOf is being validated, the property bar is removed - it is an additional property according to the standard, as it is not included in the properties keyword of the same subschema.

While this behaviour is unexpected (issues #129, #134), it is correct. To have the expected behaviour (both objects are allowed and additional properties are removed) the schema has to be refactored in this way:
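One possible refactoring: move properties and additionalProperties to the parent schema and keep only the required alternatives inside oneOf:

```json
{
  "type": "object",
  "properties": {
    "foo": { "type": "string" },
    "bar": { "type": "integer" }
  },
  "additionalProperties": false,
  "oneOf": [{ "required": ["foo"] }, { "required": ["bar"] }]
}
```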

The schema above is also more efficient - it will compile into a faster function.

Assigning defaults

With option useDefaults Ajv will assign values from default keyword in the schemas of properties and items (when it is the array of schemas) to the missing properties and items.

With the option value "empty" properties and items equal to null or "" (empty string) will be considered missing and assigned defaults.

This option modifies original data.

Please note: the default value is inserted in the generated validation code as a literal, so the value inserted in the data will be the deep clone of the default in the schema.

Example 1 (default in properties):

Example 2 (default in items):

default keywords in other cases are ignored:

  • not in properties or items subschemas
  • in schemas inside anyOf, oneOf and not (see #42)
  • in if subschema of switch keyword
  • in schemas generated by custom macro keywords

The strictDefaults option customizes Ajv’s behavior for the defaults that Ajv ignores (true raises an error, and "log" outputs a warning).

Coercing data types

When you are validating user inputs all your data properties are usually strings. The option coerceTypes allows you to have your data types coerced to the types specified in your schema type keywords, both to pass the validation and to use the correctly typed data afterwards.

This option modifies original data.

Please note: if you pass a scalar value to the validating function its type will be coerced and it will pass the validation, but the value of the variable you pass won’t be updated because scalars are passed by value.

Example 1:

Example 2 (array coercions):

The coercion rules, as you can see from the example, are different from JavaScript both to validate user input as expected and to have the coercion reversible (to correctly validate cases where different types are defined in subschemas of “anyOf” and other compound keywords).

See Coercion rules for details.

API

new Ajv(Object options) -> Object

Create Ajv instance.

.compile(Object schema) -> Function<Object data>

Generate validating function and cache the compiled schema for future use.

Validating function returns a boolean value. This function has properties errors and schema. Errors encountered during the last validation are assigned to the errors property (it is assigned null if there were no errors). The schema property contains the reference to the original schema.

The schema passed to this method will be validated against meta-schema unless validateSchema option is false. If schema is invalid, an error will be thrown. See options.

.compileAsync(Object schema [, Boolean meta] [, Function callback]) -> Promise

Asynchronous version of compile method that loads missing remote schemas using asynchronous function in options.loadSchema. This function returns a Promise that resolves to a validation function. An optional callback passed to compileAsync will be called with 2 parameters: error (or null) and validating function. The returned promise will reject (and the callback will be called with an error) when:

  • missing schema can’t be loaded (loadSchema returns a Promise that rejects).
  • a schema containing a missing reference is loaded, but the reference cannot be resolved.
  • schema (or some loaded/referenced schema) is invalid.

The function compiles the schema, loading missing schemas (or meta-schemas) one at a time until all missing schemas are loaded.

You can asynchronously compile meta-schema by passing true as the second parameter.

See example in Asynchronous compilation.

.validate(Object schema|String key|String ref, data) -> Boolean

Validate data using passed schema (it will be compiled and cached).

Instead of the schema you can use the key that was previously passed to addSchema, the schema id if it was present in the schema or any previously resolved reference.

Validation errors will be available in the errors property of Ajv instance (null if there were no errors).

Please note: every time this method is called the errors are overwritten so you need to copy them to another variable if you want to use them later.

If the schema is asynchronous (has $async keyword on the top level) this method returns a Promise. See Asynchronous validation.

.addSchema(Array<Object>|Object schema [, String key]) -> Ajv

Add schema(s) to validator instance. This method does not compile schemas (but it still validates them). Because of that, dependencies can be added in any order and circular dependencies are supported. It also prevents unnecessary compilation of schemas that are containers for other schemas but not used as a whole.

An array of schemas can be passed (schemas should have ids); the second parameter will be ignored.

A key can be passed that can be used to reference the schema; it will also be used as the schema id if there is no id inside the schema. If the key is not passed, the schema id will be used as the key.

Once the schema is added, it (and all the references inside it) can be referenced in other schemas and used to validate data.

Although addSchema does not compile schemas, explicit compilation is not required - the schema will be compiled the first time it is used.

By default the schema is validated against meta-schema before it is added, and if the schema does not pass validation the exception is thrown. This behaviour is controlled by validateSchema option.

Please note: Ajv uses the method chaining syntax for all methods with the prefix add* and remove*. This allows you to do nice things like the following.

.addMetaSchema(Array<Object>|Object schema [, String key]) -> Ajv

Adds meta schema(s) that can be used to validate other schemas. This function should be used instead of addSchema because there may be instance options that would compile a meta-schema incorrectly (at the moment, the removeAdditional option).

There is no need to explicitly add draft-07 meta schema (http://json-schema.org/draft-07/schema) - it is added by default, unless option meta is set to false. You only need to use it if you have a changed meta-schema that you want to use to validate your schemas. See validateSchema.

.validateSchema(Object schema) -> Boolean

Validates schema. This method should be used to validate schemas rather than validate due to the inconsistency of uri format in JSON Schema standard.

By default this method is called automatically when the schema is added, so you rarely need to use it directly.

If the schema doesn’t have a $schema property, it is validated against the draft-07 meta-schema (option meta should not be false).

If schema has $schema property, then the schema with this id (that should be previously added) is used to validate passed schema.

Errors will be available at ajv.errors.

.getSchema(String key) -> Function<Object data>

Retrieve compiled schema previously added with addSchema by the key passed to addSchema or by its full reference (id). The returned validating function has schema property with the reference to the original schema.

.removeSchema([Object schema|String key|String ref|RegExp pattern]) -> Ajv

Remove added/cached schema. Even if schema is referenced by other schemas it can be safely removed as dependent schemas have local references.

Schema can be removed using:

  • the key passed to addSchema
  • its full reference (id)
  • a RegExp that should match schema id or key (meta-schemas won’t be removed)
  • the actual schema object, which will be stable-stringified to remove the schema from cache

If no parameter is passed all schemas but meta-schemas will be removed and the cache will be cleared.

.addFormat(String name, String|RegExp|Function|Object format) -> Ajv

Add custom format to validate strings or numbers. It can also be used to replace pre-defined formats for Ajv instance.

Strings are converted to RegExp.

Function should return validation result as true or false.

If object is passed it should have properties validate, compare and async:

  • validate: a string, RegExp or a function as described above.
  • compare: an optional comparison function that accepts two strings and compares them according to the format meaning. This function is used with keywords formatMaximum/formatMinimum (defined in ajv-keywords package). It should return 1 if the first value is bigger than the second value, -1 if it is smaller and 0 if it is equal.
  • async: an optional true value if validate is an asynchronous function; in this case it should return a promise that resolves with a value true or false.
  • type: an optional type of data that the format applies to. It can be "string" (default) or "number" (see https://github.com/ajv-validator/ajv/issues/291#issuecomment-259923858). If the type of data is different, the validation will pass.

Custom formats can be also added via formats option.

.addKeyword(String keyword, Object definition) -> Ajv

Add custom validation keyword to Ajv instance.

Keyword should be different from all standard JSON Schema keywords and different from previously defined keywords. There is no way to redefine keywords or to remove keyword definition from the instance.

Keyword must start with a letter, _ or $, and may continue with letters, numbers, _, $, or -. It is recommended to use an application-specific prefix for keywords to avoid current and future name collisions.

Example Keywords:

  • "xyz-example": valid, and uses a prefix for the xyz project to avoid name collisions.
  • "example": valid, but not recommended as it could collide with future versions of JSON Schema etc.
  • "3-example": invalid, as numbers are not allowed to be the first character in a keyword

Keyword definition is an object with the following properties:

  • type: optional string or array of strings with data type(s) that the keyword applies to. If not present, the keyword will apply to all types.
  • validate: validating function
  • compile: compiling function
  • macro: macro function
  • inline: compiling function that returns code (as string)
  • schema: an optional false value used with “validate” keyword to not pass schema
  • metaSchema: an optional meta-schema for keyword schema
  • dependencies: an optional list of properties that must be present in the parent schema - it will be checked during schema compilation
  • modifying: true MUST be passed if keyword modifies data
  • statements: true can be passed in case inline keyword generates statements (as opposed to expression)
  • valid: pass true/false to pre-define validation result, the result returned from validation function will be ignored. This option cannot be used with macro keywords.
  • async: an optional true value if the validation function is asynchronous (whether it is compiled or passed in validate property); in this case it should return a promise that resolves with a value true or false. This option is ignored in case of “macro” and “inline” keywords.
  • errors: an optional boolean or string "full" indicating whether keyword returns errors. If this property is not set Ajv will determine if the errors were set in case of failed validation.

compile, macro and inline are mutually exclusive, only one should be used at a time. validate can be used separately or in addition to them to support $data reference.

Please note: If the keyword is validating data type that is different from the type(s) in its definition, the validation function will not be called (and expanded macro will not be used), so there is no need to check for data type inside validation function or inside schema returned by macro function (unless you want to enforce a specific type and for some reason do not want to use a separate type keyword for that). In the same way as standard keywords work, if the keyword does not apply to the data type being validated, the validation of this keyword will succeed.

See Defining custom keywords for more details.

.getKeyword(String keyword) -> Object|Boolean

Returns custom keyword definition, true for pre-defined keywords and false if the keyword is unknown.

.removeKeyword(String keyword) -> Ajv

Removes custom or pre-defined keyword so you can redefine them.

While this method can be used to extend pre-defined keywords, it can also be used to completely change their meaning - it may lead to unexpected results.

Please note: schemas compiled before the keyword is removed will continue to work without changes. To recompile schemas use removeSchema method and compile them again.

.errorsText([Array<Object> errors [, Object options]]) -> String

Returns the text with all errors in a String.

Options can have properties separator (string used to separate errors, “,” by default) and dataVar (the variable name that dataPaths are prefixed with, “data” by default).

Options

Defaults:
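A partial sketch of the defaults, assembled from the option descriptions below; it is not exhaustive:

```javascript
// partial sketch - see the option descriptions for the full list
{
  allErrors: false,
  verbose: false,
  $comment: false,
  uniqueItems: true,
  unicode: true,
  format: 'fast',
  unknownFormats: true,
  schemaId: '$id',
  missingRefs: true,
  extendRefs: 'ignore',
  removeAdditional: false,
  useDefaults: false,
  coerceTypes: false,
  strictDefaults: false,
  strictKeywords: false,
  strictNumbers: false,
  meta: true,
  validateSchema: true,
  addUsedSchema: true,
  inlineRefs: true,
  passContext: false,
  ownProperties: false,
  messages: true
  // ...
}
```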

Validation and reporting options
  • $data: support $data references. The draft-07 meta-schema that is added by default will be extended to allow them. If you want to use another meta-schema you need to use the $dataMetaSchema method to add support for $data reference. See API.
  • allErrors: check all rules collecting all errors. Default is to return after the first error.
  • verbose: include the reference to the part of the schema (schema and parentSchema) and validated data in errors (false by default).
  • $comment (NEW in Ajv version 6.0): log or pass the value of the $comment keyword to a function. Option values:
    • false (default): ignore $comment keyword.
    • true: log the keyword value to console.
    • function: pass the keyword value, its schema path and root schema to the specified function
  • jsonPointers: set dataPath property of errors using JSON Pointers instead of JavaScript property access notation.
  • uniqueItems: validate uniqueItems keyword (true by default).
  • unicode: calculate correct length of strings with unicode pairs (true by default). Pass false to use .length of strings that is faster, but gives “incorrect” lengths of strings with unicode pairs - each unicode pair is counted as two characters.
  • nullable: support keyword “nullable” from Open API 3 specification.
  • format: formats validation mode. Option values:
    • "fast" (default) - simplified and fast validation (see Formats for details of which formats are available and affected by this option).
    • "full" - more restrictive and slow validation. E.g., 25:00:00 and 2015/14/33 will be invalid time and date in ‘full’ mode but valid in ‘fast’ mode.
    • false - ignore all format keywords.
  • formats: an object with custom formats. Keys and values will be passed to addFormat method.
  • keywords: an object with custom keywords. Keys and values will be passed to addKeyword method.
  • unknownFormats: handling of unknown formats. Option values:
    • true (default) - if an unknown format is encountered the exception is thrown during schema compilation. If format keyword value is $data reference and it is unknown the validation will fail.
    • [String] - an array of unknown format names that will be ignored. This option can be used to allow usage of third party schemas with format(s) for which you don’t have definitions, but still fail if another unknown format is used. If format keyword value is $data reference and it is not in this array the validation will fail.
    • "ignore" - to log warning during schema compilation and always pass validation (the default behaviour in versions before 5.0.0). This option is not recommended, as it allows a format name to be mistyped and then never validated, without any error message. This behaviour is required by the JSON Schema specification.
  • schemas: an array or object of schemas that will be added to the instance. In case you pass the array the schemas must have IDs in them. When the object is passed the method addSchema(value, key) will be called for each schema in this object.
  • logger: sets the logging method. Default is the global console object that should have methods log, warn and error. See Error logging. Option values:
    • custom logger - it should have methods log, warn and error. If any of these methods is missing an exception will be thrown.
    • false - logging is disabled.
Referenced schema options
  • schemaId: this option defines which keywords are used as schema URI. Option value:
    • "$id" (default) - only use $id keyword as schema URI (as specified in JSON Schema draft-06/07), ignore id keyword (if it is present a warning will be logged).
    • "id" - only use id keyword as schema URI (as specified in JSON Schema draft-04), ignore $id keyword (if it is present a warning will be logged).
    • "auto" - use both $id and id keywords as schema URI. If both are present (in the same schema object) and different the exception will be thrown during schema compilation.
  • missingRefs: handling of missing referenced schemas. Option values:
    • true (default) - if the reference cannot be resolved during compilation the exception is thrown. The thrown error has properties missingRef (with hash fragment) and missingSchema (without it). Both properties are resolved relative to the current base id (usually schema id, unless it was substituted).
    • "ignore" - to log error during compilation and always pass validation.
    • "fail" - to log error and successfully compile schema but fail validation if this rule is checked.
  • extendRefs: validation of other keywords when $ref is present in the schema. Option values:
    • "ignore" (default) - when $ref is used other keywords are ignored (as per JSON Reference standard). A warning will be logged during the schema compilation.
    • "fail" (recommended) - if other validation keywords are used together with $ref the exception will be thrown when the schema is compiled. This option is recommended to make sure schema has no keywords that are ignored, which can be confusing.
    • true - validate all keywords in the schemas with $ref (the default behaviour in versions before 5.0.0).
  • loadSchema: asynchronous function that will be used to load remote schemas when compileAsync method is used and some reference is missing (option missingRefs should NOT be ‘fail’ or ‘ignore’). This function should accept remote schema uri as a parameter and return a Promise that resolves to a schema. See example in Asynchronous compilation.
Options to modify validated data
  • removeAdditional: remove additional properties - see example in Filtering data. This option is not used if schema is added with addMetaSchema method. Option values:
    • false (default) - not to remove additional properties
    • "all" - all additional properties are removed, regardless of additionalProperties keyword in schema (and no validation is made for them).
    • true - only additional properties with additionalProperties keyword equal to false are removed.
    • "failing" - additional properties that fail schema validation will be removed (where additionalProperties keyword is false or schema).
  • useDefaults: replace missing or undefined properties and items with the values from corresponding default keywords. Default behaviour is to ignore default keywords. This option is not used if schema is added with addMetaSchema method. See examples in Assigning defaults. Option values:
    • false (default) - do not use defaults
    • true - insert defaults by value (object literal is used).
    • "empty" - in addition to missing or undefined, use defaults for properties and items that are equal to null or "" (an empty string).
    • "shared" (deprecated) - insert defaults by reference. If the default is an object, it will be shared by all instances of validated data. If you modify the inserted default in the validated data, it will be modified in the schema as well.
  • coerceTypes: change data type of data to match type keyword. See the example in Coercing data types and coercion rules. Option values:
    • false (default) - no type coercion.
    • true - coerce scalar data types.
    • "array" - in addition to coercions between scalar types, coerce scalar data to an array with one element and vice versa (as required by the schema).
Strict mode options
  • strictDefaults: report ignored default keywords in schemas. Option values:
    • false (default) - ignored defaults are not reported
    • true - if an ignored default is present, throw an error
    • "log" - if an ignored default is present, log warning
  • strictKeywords: report unknown keywords in schemas. Option values:
    • false (default) - unknown keywords are not reported
    • true - if an unknown keyword is present, throw an error
    • "log" - if an unknown keyword is present, log warning
  • strictNumbers: validate numbers strictly, failing validation for NaN and Infinity. Option values:
    • false (default) - NaN or Infinity will pass validation for numeric types
    • true - NaN or Infinity will not pass validation for numeric types
Asynchronous validation options
  • transpile: Requires ajv-async package. It determines whether Ajv transpiles compiled asynchronous validation function. Option values:
    • undefined (default) - transpile with nodent if async functions are not supported.
    • true - always transpile with nodent.
    • false - do not transpile; if async functions are not supported an exception will be thrown.
Advanced options
  • meta: add meta-schema so it can be used by other schemas (true by default). If an object is passed, it will be used as the default meta-schema for schemas that have no $schema keyword. This default meta-schema MUST have $schema keyword.
  • validateSchema: validate added/compiled schemas against meta-schema (true by default). $schema property in the schema can be http://json-schema.org/draft-07/schema or absent (draft-07 meta-schema will be used) or can be a reference to the schema previously added with addMetaSchema method. Option values:
    • true (default) - if the validation fails, throw the exception.
    • "log" - if the validation fails, log error.
    • false - skip schema validation.
  • addUsedSchema: by default methods compile and validate add schemas to the instance if they have $id (or id) property that doesn’t start with “#”. If $id is present and it is not unique the exception will be thrown. Set this option to false to skip adding schemas to the instance and the $id uniqueness check when these methods are used. This option does not affect addSchema method.
  • inlineRefs: Affects compilation of referenced schemas. Option values:
    • true (default) - the referenced schemas that don’t have refs in them are inlined, regardless of their size - that substantially improves performance at the cost of the bigger size of compiled schema functions.
    • false - to not inline referenced schemas (they will be compiled as separate functions).
    • integer number - to limit the maximum number of keywords of the schema that will be inlined.
  • passContext: pass validation context to custom keyword functions. If this option is true and you pass some context to the compiled validation function with validate.call(context, data), the context will be available as this in your custom keywords. By default this is Ajv instance.
  • loopRequired: by default required keyword is compiled into a single expression (or a sequence of statements in allErrors mode). In case of a very large number of properties in this keyword it may result in a very big validation function. Pass integer to set the number of properties above which required keyword will be validated in a loop - smaller validation function size but also worse performance.
  • ownProperties: by default Ajv iterates over all enumerable object properties; when this option is true only own enumerable object properties (i.e. found directly on the object rather than on its prototype) are iterated. Contributed by @mbroadst.
  • multipleOfPrecision: by default multipleOf keyword is validated by comparing the result of division with parseInt() of that result. It works for dividers that are bigger than 1. For small dividers such as 0.01 the result of the division is usually not integer (even when it should be integer, see issue #84). If you need to use fractional dividers set this option to some positive integer N to have multipleOf validated using this formula: Math.abs(Math.round(division) - division) < 1e-N (it is slower but allows for float arithmetics deviations).
  • errorDataPath (deprecated): set dataPath to point to ‘object’ (default) or to ‘property’ when validating keywords required, additionalProperties and dependencies.
  • messages: Include human-readable messages in errors. true by default. false can be passed when custom messages are used (e.g. with ajv-i18n).
  • sourceCode: add sourceCode property to validating function (for debugging; this code can be different from the result of toString call).
  • processCode: an optional function to process generated code before it is passed to Function constructor. It can be used to either beautify (the validating function is generated without line-breaks) or to transpile code. Starting from version 5.0.0 this option replaced options:
    • beautify that formatted the generated function using js-beautify. If you want to beautify the generated code pass a function calling require('js-beautify').js_beautify as processCode: code => js_beautify(code).
    • transpile that transpiled asynchronous validation function. You can still use transpile option with ajv-async package. See Asynchronous validation for more information.
  • cache: an optional instance of cache to store compiled schemas using stable-stringified schema as a key. For example, set-associative cache sacjs can be used. If not passed then a simple hash is used which is good enough for the common use case (a limited number of statically defined schemas). Cache should have methods put(key, value), get(key), del(key) and clear().
  • serialize: an optional function to serialize schema to cache key. Pass false to use schema itself as a key (e.g., if WeakMap used as a cache). By default fast-json-stable-stringify is used.

Validation errors

In case of validation failure, Ajv assigns the array of errors to errors property of validation function (or to errors property of Ajv instance when validate or validateSchema methods were called). In case of asynchronous validation, the returned promise is rejected with exception Ajv.ValidationError that has errors property.

Error objects

Each error is an object with the following properties:

  • keyword: validation keyword.
  • dataPath: the path to the part of the data that was validated. By default dataPath uses JavaScript property access notation (e.g., ".prop[1].subProp"). When the option jsonPointers is true (see Options) dataPath will be set using JSON pointer standard (e.g., "/prop/1/subProp").
  • schemaPath: the path (JSON-pointer as a URI fragment) to the schema of the keyword that failed validation.
  • params: the object with the additional information about error that can be used to create custom error messages (e.g., using ajv-i18n package). See below for parameters set by all keywords.
  • message: the standard error message (can be excluded with option messages set to false).
  • schema: the schema of the keyword (added with verbose option).
  • parentSchema: the schema containing the keyword (added with verbose option)
  • data: the data validated by the keyword (added with verbose option).
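A hypothetical error object for a failed minimum keyword, using the properties listed above (the values are illustrative, not actual Ajv output):

```javascript
// Sketch of an Ajv error object; property names follow the list above.
const sampleError = {
  keyword: 'minimum',
  dataPath: '.age',                       // JS property access notation (default)
  schemaPath: '#/properties/age/minimum', // JSON pointer into the schema
  params: { comparison: '>=', limit: 18, exclusive: false },
  message: 'should be >= 18'
};
console.log(sampleError.dataPath); // ".age"
```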

Please note: for the propertyNames keyword, schema validation errors have an additional property propertyName, and dataPath points to the object. After schema validation of each property name, if it is invalid, an additional error is added with the property keyword equal to "propertyNames".

Error parameters

Properties of params object in errors depend on the keyword that failed validation.

  • maxItems, minItems, maxLength, minLength, maxProperties, minProperties - property limit (number, the schema of the keyword).
  • additionalItems - property limit (the maximum number of allowed items in case when items keyword is an array of schemas and additionalItems is false).
  • additionalProperties - property additionalProperty (the property not used in properties and patternProperties keywords).
  • dependencies - properties:
    • property (dependent property),
    • missingProperty (required missing dependency - only the first one is reported currently)
    • deps (required dependencies, comma separated list as a string),
    • depsCount (the number of required dependencies).
  • format - property format (the schema of the keyword).
  • maximum, minimum - properties:
    • limit (number, the schema of the keyword),
    • exclusive (boolean, the schema of exclusiveMaximum or exclusiveMinimum),
    • comparison (string, comparison operation to compare the data to the limit, with the data on the left and the limit on the right; can be “<”, “<=”, “>”, “>=”)
  • multipleOf - property multipleOf (the schema of the keyword)
  • pattern - property pattern (the schema of the keyword)
  • required - property missingProperty (required property that is missing).
  • propertyNames - property propertyName (an invalid property name).
  • patternRequired (in ajv-keywords) - property missingPattern (required pattern that did not match any property).
  • type - property type (required type(s), a string, can be a comma-separated list)
  • uniqueItems - properties i and j (indices of duplicate items).
  • const - property allowedValue pointing to the value (the schema of the keyword).
  • enum - property allowedValues pointing to the array of values (the schema of the keyword).
  • $ref - property ref with the referenced schema URI.
  • oneOf - property passingSchemas (array of indices of passing schemas, null if no schema passes).
  • custom keywords (in case keyword definition doesn’t create errors) - property keyword (the keyword name).

Error logging

Using the logger option when initializing Ajv allows you to define custom logging, building on the existing logging. The use of other logging packages is supported as long as the package or its associated wrapper exposes the required methods: log, warn and error. If any of the required methods is missing, an exception will be thrown.
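For example, any object exposing those three methods can serve as the logger; a minimal sketch (the last line shows how it would be wired up and is commented out because it needs the ajv package):

```javascript
// Custom logger sketch: only the three required methods matter.
const logger = {
  log: (...args) => console.log('[ajv]', ...args),
  warn: (...args) => console.warn('[ajv]', ...args),
  error: (...args) => console.error('[ajv]', ...args)
};

// const ajv = new Ajv({ logger });  // pass it via the logger option
```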

Plugins

Ajv can be extended with plugins that add custom keywords, formats or functions to process generated code. When such plugin is published as npm package it is recommended that it follows these conventions:

  • it exports a function
  • this function accepts ajv instance as the first parameter and returns the same instance to allow chaining
  • this function can accept an optional configuration as the second parameter
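A sketch of that convention (myPlugin and myKeyword are made-up names for illustration): a function that accepts the ajv instance and an optional configuration, and returns the same instance to allow chaining.

```javascript
// Plugin convention sketch: accept the instance, extend it, return it.
function myPlugin(ajv, options = {}) {
  ajv.addKeyword('myKeyword', { validate: () => true, ...options });
  return ajv;
}

// Demo with a stub standing in for an Ajv instance:
const stub = { addKeyword(name, definition) { this.last = name; } };
const same = myPlugin(stub) === stub; // true - the instance is returned
```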

If you have published a useful plugin please submit a PR to add it to the next section.

  • ajv-async - plugin to configure async validation mode
  • ajv-bsontype - plugin to validate mongodb’s bsonType formats
  • ajv-cli - command line interface
  • ajv-errors - plugin for custom error messages
  • ajv-i18n - internationalised error messages
  • ajv-istanbul - plugin to instrument generated validation code to measure test coverage of your schemas
  • ajv-keywords - plugin with custom validation keywords (select, typeof, etc.)
  • ajv-merge-patch - plugin with keywords $merge and $patch
  • ajv-pack - produces a compact module exporting validation functions
  • ajv-formats-draft2019 - format validators for draft2019 that aren’t already included in ajv (i.e. idn-hostname, idn-email, iri, iri-reference and duration).

Some packages using Ajv

  • webpack - a module bundler. Its main purpose is to bundle JavaScript files for usage in a browser
  • jsonscript-js - the interpreter for JSONScript - scripted processing of existing endpoints and services
  • osprey-method-handler - Express middleware for validating requests and responses based on a RAML method object, used in osprey - validating API proxy generated from a RAML definition
  • har-validator - HTTP Archive (HAR) validator
  • jsoneditor - a web-based tool to view, edit, format, and validate JSON http://jsoneditoronline.org
  • JSON Schema Lint - a web tool to validate JSON/YAML document against a single JSON Schema http://jsonschemalint.com
  • objection - SQL-friendly ORM for Node.js
  • table - formats data into a string table
  • ripple-lib - a JavaScript API for interacting with Ripple in Node.js and the browser
  • restbase - distributed storage with REST API & dispatcher for backend services built to provide a low-latency & high-throughput API for Wikipedia / Wikimedia content
  • hippie-swagger - Hippie wrapper that provides end to end API testing with swagger validation
  • react-form-controlled - React controlled form components with validation
  • rabbitmq-schema - a schema definition module for RabbitMQ graphs and messages
  • [@query/schema](https://www.npmjs.com/package/@query/schema) - stream filtering with a URI-safe query syntax parsing to JSON Schema
  • chai-ajv-json-schema - chai plugin to use JSON Schema with expect in mocha tests
  • grunt-jsonschema-ajv - Grunt plugin for validating files against JSON Schema
  • extract-text-webpack-plugin - extract text from bundle into a file
  • electron-builder - a solution to package and build a ready for distribution Electron app
  • addons-linter - Mozilla Add-ons Linter
  • gh-pages-generator - multi-page site generator converting markdown files to GitHub pages
  • ESLint - the pluggable linting utility for JavaScript and JSX

Tests

npm install
git submodule update --init
npm test

Contributing

All validation functions are generated using doT templates in dot folder. Templates are precompiled so doT is not a run-time dependency.

npm run build - compiles templates to dotjs folder.

npm run watch - automatically compiles templates when files in dot folder change

Please see Contributing guidelines

Changes history

See https://github.com/ajv-validator/ajv/releases

Please note:

  • Changes in version 7.0.0-beta
  • Changes in version 6.0.0

Code of conduct

Please review and follow the Code of conduct.

Please report any unacceptable behaviour to ajv.validator@gmail.com - it will be reviewed by the project team.



core-js

Sponsors on Open Collective Backers on Open Collective Gitter version npm downloads Build Status devDependency status

As advertising: the author is looking for a good job :)

core-js@3, babel and a look into the future

Raising funds

core-js isn’t backed by a company, so the future of this project depends on you. Become a sponsor or a backer on Open Collective or on Patreon if you are interested in core-js.




This is documentation for the obsolete core-js@2. If you are looking for documentation for the current core-js version, please check this branch.

Modular standard library for JavaScript. Includes polyfills for ECMAScript 5, ECMAScript 6: promises, symbols, collections, iterators, typed arrays, ECMAScript 7+ proposals, setImmediate, etc. Some additional features such as dictionaries or extended partial application. You can require only needed features or use it without global namespace pollution.

Example:

Without global namespace pollution:

Index

Usage

Basic

npm i core-js
bower install core.js

If you need complete build for browser, use builds from core-js/client path:

Warning: if you use core-js with the extension of native objects, require all needed core-js modules at the beginning of entry point of your application, otherwise, conflicts may occur.

CommonJS

You can require only needed modules.

Entry points are available for methods / constructors, as in the examples above, and for namespaces: for example, core-js/es6/array (core-js/library/es6/array) contains all ES6 Array features, core-js/es6 (core-js/library/es6) contains all ES6 features.

Caveats when using CommonJS API:
  • the modules path is internal API, does not inject all required dependencies and can be changed in minor or patch releases. Use it only for a custom build and / or if you know what you are doing.
  • core-js is extremely modular and uses a lot of very tiny modules; because of that, for usage in browsers, bundle core-js up instead of using a loader for each file, otherwise you will have hundreds of requests.

CommonJS and prototype methods without global namespace pollution

In the library version, we can’t pollute prototypes of native constructors. Because of that, prototype methods are transformed to static methods as in the examples above. The babel runtime transformer also can’t transform them. But with transpilers we can use one more trick - the bind operator and virtual methods. Specifically for that, /virtual/ entry points are available. Example:

Custom build (from the command-line)

npm i core-js && cd node_modules/core-js && npm i
npm run grunt build:core.dict,es6 -- --blacklist=es6.promise,es6.math --library=on --path=custom uglify

Where core.dict and es6 are names of modules (namespaces) that will be added to the build, es6.promise and es6.math are names of modules (namespaces) that will be excluded from the build, --library=on is the flag for a build without global namespace pollution, and custom is the target file name.

Available namespaces: for example, es6.array contains ES6 Array features, es6 contains all modules whose names start with es6.

Custom build (from external scripts)

core-js-builder package exports a function that takes the same parameters as the build target from the previous section. This will conditionally include or exclude certain parts of core-js:

Tested in: - Chrome 26+ - Firefox 4+ - Safari 5+ - Opera 12+ - Internet Explorer 6+ (sure, IE8- with ES3 limitations) - Edge - Android Browser 2.3+ - iOS Safari 5.1+ - PhantomJS 1.9 / 2.1 - NodeJS 0.8+

…and it doesn’t mean core-js will not work in other engines, they just have not been tested.

Features:

CommonJS entry points:

core-js(/library)       <- all features
core-js(/library)/shim  <- only polyfills

ECMAScript 5

All features have moved to the es6 namespace; here is just a list of features:

CommonJS entry points:

core-js(/library)/es5

ECMAScript 6

CommonJS entry points:

core-js(/library)/es6

ECMAScript 6: Object

Modules es6.object.assign, es6.object.is, es6.object.set-prototype-of and es6.object.to-string.

In ES6 most Object static methods should work with primitives. Modules es6.object.freeze, es6.object.seal, es6.object.prevent-extensions, es6.object.is-frozen, es6.object.is-sealed, es6.object.is-extensible, es6.object.get-own-property-descriptor, es6.object.get-prototype-of, es6.object.keys and es6.object.get-own-property-names.

Just ES5 features: es6.object.create, es6.object.define-property and es6.object.define-properties.

CommonJS entry points:

core-js(/library)/es6/object
core-js(/library)/fn/object/assign
core-js(/library)/fn/object/is
core-js(/library)/fn/object/set-prototype-of
core-js(/library)/fn/object/get-prototype-of
core-js(/library)/fn/object/create
core-js(/library)/fn/object/define-property
core-js(/library)/fn/object/define-properties
core-js(/library)/fn/object/get-own-property-descriptor
core-js(/library)/fn/object/keys
core-js(/library)/fn/object/get-own-property-names
core-js(/library)/fn/object/freeze
core-js(/library)/fn/object/seal
core-js(/library)/fn/object/prevent-extensions
core-js(/library)/fn/object/is-frozen
core-js(/library)/fn/object/is-sealed
core-js(/library)/fn/object/is-extensible
core-js/fn/object/to-string

Examples:
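The stripped examples can be approximated with the native methods these modules polyfill (they run in any modern engine):

```javascript
// Native behavior of the polyfilled Object methods.
const merged = Object.assign({ q: 1 }, { w: 2 });   // { q: 1, w: 2 }
const nanEqual = Object.is(NaN, NaN);               // true (unlike ===)
const keysOfPrimitive = Object.keys('qwe');         // works on primitives in ES6
console.log(keysOfPrimitive); // [ '0', '1', '2' ]
```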

ECMAScript 6: Function

Modules es6.function.name, es6.function.has-instance. Just ES5: es6.function.bind.

CommonJS entry points:

core-js/es6/function
core-js/fn/function/name
core-js/fn/function/has-instance
core-js/fn/function/bind
core-js/fn/function/virtual/bind

Example:
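A native sketch of the polyfilled features - the name property and Symbol.hasInstance (the Even class is a made-up illustration):

```javascript
// Function#name and customizing instanceof via Symbol.hasInstance.
const fn = function foo() {};
const n = fn.name; // 'foo'

class Even {
  static [Symbol.hasInstance](value) { return value % 2 === 0; }
}
const isEven = 4 instanceof Even; // true, via Symbol.hasInstance
```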

ECMAScript 6: Array

Modules es6.array.from, es6.array.of, es6.array.copy-within, es6.array.fill, es6.array.find, es6.array.find-index, es6.array.iterator. ES5 features with fixes: es6.array.is-array, es6.array.slice, es6.array.join, es6.array.index-of, es6.array.last-index-of, es6.array.every, es6.array.some, es6.array.for-each, es6.array.map, es6.array.filter, es6.array.reduce, es6.array.reduce-right, es6.array.sort.

CommonJS entry points:

core-js(/library)/es6/array
core-js(/library)/fn/array/from
core-js(/library)/fn/array/of
core-js(/library)/fn/array/is-array
core-js(/library)/fn/array/iterator
core-js(/library)/fn/array/copy-within
core-js(/library)/fn/array/fill
core-js(/library)/fn/array/find
core-js(/library)/fn/array/find-index
core-js(/library)/fn/array/values
core-js(/library)/fn/array/keys
core-js(/library)/fn/array/entries
core-js(/library)/fn/array/slice
core-js(/library)/fn/array/join
core-js(/library)/fn/array/index-of
core-js(/library)/fn/array/last-index-of
core-js(/library)/fn/array/every
core-js(/library)/fn/array/some
core-js(/library)/fn/array/for-each
core-js(/library)/fn/array/map
core-js(/library)/fn/array/filter
core-js(/library)/fn/array/reduce
core-js(/library)/fn/array/reduce-right
core-js(/library)/fn/array/sort
core-js(/library)/fn/array/virtual/iterator
core-js(/library)/fn/array/virtual/copy-within
core-js(/library)/fn/array/virtual/fill
core-js(/library)/fn/array/virtual/find
core-js(/library)/fn/array/virtual/find-index
core-js(/library)/fn/array/virtual/values
core-js(/library)/fn/array/virtual/keys
core-js(/library)/fn/array/virtual/entries
core-js(/library)/fn/array/virtual/slice
core-js(/library)/fn/array/virtual/join
core-js(/library)/fn/array/virtual/index-of
core-js(/library)/fn/array/virtual/last-index-of
core-js(/library)/fn/array/virtual/every
core-js(/library)/fn/array/virtual/some
core-js(/library)/fn/array/virtual/for-each
core-js(/library)/fn/array/virtual/map
core-js(/library)/fn/array/virtual/filter
core-js(/library)/fn/array/virtual/reduce
core-js(/library)/fn/array/virtual/reduce-right
core-js(/library)/fn/array/virtual/sort

Examples:
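A native sketch of a few of the polyfilled Array features listed above:

```javascript
// Array.from / Array.of and the new prototype methods.
const fromIterable = Array.from(new Set([1, 2, 3, 2, 1])); // [1, 2, 3]
const ofArgs = Array.of(1, 2, 3);                          // [1, 2, 3]
const found = [1, 2, 3, 4].find(x => x % 2 === 0);         // 2
const filled = Array(3).fill(5);                           // [5, 5, 5]
```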

ECMAScript 6: String

Modules es6.string.from-code-point, es6.string.raw, es6.string.iterator, es6.string.code-point-at, es6.string.ends-with, es6.string.includes, es6.string.repeat, es6.string.starts-with and es6.string.trim.

Annex B HTML methods. Ugly, but it’s also the part of the spec. Modules es6.string.anchor, es6.string.big, es6.string.blink, es6.string.bold, es6.string.fixed, es6.string.fontcolor, es6.string.fontsize, es6.string.italics, es6.string.link, es6.string.small, es6.string.strike, es6.string.sub and es6.string.sup.

CommonJS entry points:

core-js(/library)/es6/string
core-js(/library)/fn/string/from-code-point
core-js(/library)/fn/string/raw
core-js(/library)/fn/string/includes
core-js(/library)/fn/string/starts-with
core-js(/library)/fn/string/ends-with
core-js(/library)/fn/string/repeat
core-js(/library)/fn/string/code-point-at
core-js(/library)/fn/string/trim
core-js(/library)/fn/string/anchor
core-js(/library)/fn/string/big
core-js(/library)/fn/string/blink
core-js(/library)/fn/string/bold
core-js(/library)/fn/string/fixed
core-js(/library)/fn/string/fontcolor
core-js(/library)/fn/string/fontsize
core-js(/library)/fn/string/italics
core-js(/library)/fn/string/link
core-js(/library)/fn/string/small
core-js(/library)/fn/string/strike
core-js(/library)/fn/string/sub
core-js(/library)/fn/string/sup
core-js(/library)/fn/string/iterator
core-js(/library)/fn/string/virtual/includes
core-js(/library)/fn/string/virtual/starts-with
core-js(/library)/fn/string/virtual/ends-with
core-js(/library)/fn/string/virtual/repeat
core-js(/library)/fn/string/virtual/code-point-at
core-js(/library)/fn/string/virtual/trim
core-js(/library)/fn/string/virtual/anchor
core-js(/library)/fn/string/virtual/big
core-js(/library)/fn/string/virtual/blink
core-js(/library)/fn/string/virtual/bold
core-js(/library)/fn/string/virtual/fixed
core-js(/library)/fn/string/virtual/fontcolor
core-js(/library)/fn/string/virtual/fontsize
core-js(/library)/fn/string/virtual/italics
core-js(/library)/fn/string/virtual/link
core-js(/library)/fn/string/virtual/small
core-js(/library)/fn/string/virtual/strike
core-js(/library)/fn/string/virtual/sub
core-js(/library)/fn/string/virtual/sup
core-js(/library)/fn/string/virtual/iterator

Examples:
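A native sketch of several of the polyfilled String features:

```javascript
// ES6 String statics and prototype methods.
const r = 'abc'.repeat(3);              // 'abcabcabc'
const inc = 'foobar'.includes('bar');   // true
const sw = 'foobar'.startsWith('foo');  // true
const cp = String.fromCodePoint(97);    // 'a'
```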

ECMAScript 6: RegExp

Modules es6.regexp.constructor and es6.regexp.flags.

[new] RegExp(pattern, flags?) -> regexp, ES6 fix: can alter flags (IE9+)
  #flags -> str (IE9+)
  #toString() -> str, ES6 fixes
  #@@match(str)             -> array | null
  #@@replace(str, replacer) -> string
  #@@search(str)            -> index
  #@@split(str, limit)      -> array
String
  #match(tpl)             -> var, ES6 fix for support @@match
  #replace(tpl, replacer) -> var, ES6 fix for support @@replace
  #search(tpl)            -> var, ES6 fix for support @@search
  #split(tpl, limit)      -> var, ES6 fix for support @@split, some fixes for old engines

CommonJS entry points:

core-js/es6/regexp
core-js/fn/regexp/constructor
core-js(/library)/fn/regexp/flags
core-js/fn/regexp/to-string
core-js/fn/regexp/match
core-js/fn/regexp/replace
core-js/fn/regexp/search
core-js/fn/regexp/split

Examples:
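A native sketch of the #flags getter and the @@split / @@search dispatch described above:

```javascript
// RegExp#flags and String methods dispatching to the well-known symbols.
const flags = /asd/gim.flags;      // 'gim'
const parts = 'a-b-c'.split(/-/);  // ['a', 'b', 'c'] via @@split
const idx = 'abc'.search(/b/);     // 1 via @@search
```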

ECMAScript 6: Number

Module es6.number.constructor. The Number constructor supports binary and octal literals, example:
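A native illustration (the polyfilled constructor parses the same strings):

```javascript
// ES6 Number constructor accepts binary and octal literal strings.
const bin = Number('0b1010101'); // 85
const oct = Number('0o7');       // 7
```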

Modules es6.number.epsilon, es6.number.is-finite, es6.number.is-integer, es6.number.is-nan, es6.number.is-safe-integer, es6.number.max-safe-integer, es6.number.min-safe-integer, es6.number.parse-float, es6.number.parse-int, es6.number.to-fixed, es6.number.to-precision, es6.parse-int, es6.parse-float.

CommonJS entry points:

core-js(/library)/es6/number
core-js/es6/number/constructor
core-js(/library)/fn/number/is-finite
core-js(/library)/fn/number/is-nan
core-js(/library)/fn/number/is-integer
core-js(/library)/fn/number/is-safe-integer
core-js(/library)/fn/number/parse-float
core-js(/library)/fn/number/parse-int
core-js(/library)/fn/number/epsilon
core-js(/library)/fn/number/max-safe-integer
core-js(/library)/fn/number/min-safe-integer
core-js(/library)/fn/number/to-fixed
core-js(/library)/fn/number/to-precision
core-js(/library)/fn/parse-float
core-js(/library)/fn/parse-int

ECMAScript 6: Math

Modules es6.math.acosh, es6.math.asinh, es6.math.atanh, es6.math.cbrt, es6.math.clz32, es6.math.cosh, es6.math.expm1, es6.math.fround, es6.math.hypot, es6.math.imul, es6.math.log10, es6.math.log1p, es6.math.log2, es6.math.sign, es6.math.sinh, es6.math.tanh, es6.math.trunc.

CommonJS entry points:

core-js(/library)/es6/math
core-js(/library)/fn/math/acosh
core-js(/library)/fn/math/asinh
core-js(/library)/fn/math/atanh
core-js(/library)/fn/math/cbrt
core-js(/library)/fn/math/clz32
core-js(/library)/fn/math/cosh
core-js(/library)/fn/math/expm1
core-js(/library)/fn/math/fround
core-js(/library)/fn/math/hypot
core-js(/library)/fn/math/imul
core-js(/library)/fn/math/log1p
core-js(/library)/fn/math/log10
core-js(/library)/fn/math/log2
core-js(/library)/fn/math/sign
core-js(/library)/fn/math/sinh
core-js(/library)/fn/math/tanh
core-js(/library)/fn/math/trunc
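A native sketch of a few of the polyfilled Math methods:

```javascript
// ES6 Math additions.
const s = Math.sign(-8);    // -1
const t = Math.trunc(4.7);  // 4
const h = Math.hypot(3, 4); // 5
const c = Math.clz32(1);    // 31 (leading zero bits in 32-bit representation)
```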

ECMAScript 6: Date

Module es6.date.to-string; ES5 features with fixes: es6.date.now, es6.date.to-iso-string, es6.date.to-json and es6.date.to-primitive.

CommonJS entry points:

core-js/es6/date
core-js/fn/date/to-string
core-js(/library)/fn/date/now
core-js(/library)/fn/date/to-iso-string
core-js(/library)/fn/date/to-json
core-js(/library)/fn/date/to-primitive

Example:

ECMAScript 6: Promise

Module es6.promise.

CommonJS entry points:

core-js(/library)/es6/promise
core-js(/library)/fn/promise

Basic example:
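A minimal native sketch of the polyfilled Promise:

```javascript
// Promise basics: resolution and then-chaining.
const p = Promise.resolve(42);
const isPromise = p instanceof Promise; // true
p.then(value => console.log(value));    // logs 42 asynchronously
```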

Promise.resolve and Promise.reject example:

Promise.all example:

Promise.race example:

ECMAScript 7 async functions example:

Unhandled rejection tracking

In Node.js, like in the native implementation, the events unhandledRejection and rejectionHandled are available:

In a browser, on rejection you will by default see a notification in the console; alternatively, you can add a custom handler for unhandled rejections and a handler for their later handling, example:

ECMAScript 6: Symbol

Module es6.symbol.

Some methods are also wrapped for correct work with the Symbol polyfill.

CommonJS entry points:

core-js(/library)/es6/symbol
core-js(/library)/fn/symbol
core-js(/library)/fn/symbol/has-instance
core-js(/library)/fn/symbol/is-concat-spreadable
core-js(/library)/fn/symbol/iterator
core-js(/library)/fn/symbol/match
core-js(/library)/fn/symbol/replace
core-js(/library)/fn/symbol/search
core-js(/library)/fn/symbol/species
core-js(/library)/fn/symbol/split
core-js(/library)/fn/symbol/to-primitive
core-js(/library)/fn/symbol/to-string-tag
core-js(/library)/fn/symbol/unscopables
core-js(/library)/fn/symbol/for
core-js(/library)/fn/symbol/key-for

Basic example:
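A native sketch of the basic Symbol behavior (the polyfill approximates this, with the caveats below):

```javascript
// Symbol keys are not enumerated; Symbol.for uses a global registry.
const sym = Symbol('description');
const obj = {};
obj[sym] = 42;
const hidden = Object.keys(obj).length;                 // 0
const shared = Symbol.for('key') === Symbol.for('key'); // true
```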

Symbol.for & Symbol.keyFor example:

Example with methods for getting own object keys:

Caveats when using Symbol polyfill:
  • We can’t add a new primitive type, so Symbol returns an object.
  • Symbol.for and Symbol.keyFor can’t be shimmed cross-realm.
  • By default, to hide the keys, the Symbol polyfill defines a setter in Object.prototype. For this reason, uncontrolled creation of symbols can cause a memory leak, and the in operator does not work correctly with the Symbol polyfill: Symbol() in {} // => true.

You can disable defining setters in Object.prototype. Example:

  • Currently, core-js does not add setters to Object.prototype for well-known symbols (needed for the correct work of constructs like Symbol.iterator in foo). This can cause problems with their enumerability.
  • Some problems are possible with exotic environment objects (for example, IE localStorage).

ECMAScript 6: Collections

core-js uses native collections in most cases, just fixing methods / constructors where required, and in old environments uses a fast polyfill (O(1) lookup).

Map

Module es6.map.

CommonJS entry points:

core-js(/library)/es6/map
core-js(/library)/fn/map

Examples:
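A native sketch of the polyfilled Map behavior:

```javascript
// Map accepts iterables of pairs and uses SameValueZero for keys.
const map = new Map([['a', 1]]);
map.set(NaN, 2);             // NaN is a valid Map key
const hasNaN = map.has(NaN); // true
const size = map.size;       // 2
```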

Set

Module es6.set.

CommonJS entry points:

core-js(/library)/es6/set
core-js(/library)/fn/set

Examples:
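A native sketch of the polyfilled Set behavior:

```javascript
// Set collapses duplicates using SameValueZero.
const set = new Set([1, 2, 3, 2, 1]);
const size = set.size;     // 3
const hasTwo = set.has(2); // true
```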

WeakMap

Module es6.weak-map.

CommonJS entry points:

core-js(/library)/es6/weak-map
core-js(/library)/fn/weak-map

Examples:

WeakSet

Module es6.weak-set.

CommonJS entry points:

core-js(/library)/es6/weak-set
core-js(/library)/fn/weak-set

Examples:

Caveats when using collections polyfill:
  • The weak-collections polyfill stores values as hidden properties of keys. It works correctly and does not leak in most cases. However, it is undesirable to keep a collection alive longer than its keys.

ECMAScript 6: Typed Arrays

Implementations of and fixes for ArrayBuffer, DataView and the typed array constructors, with their static and prototype methods. Typed arrays work only in environments that support descriptors (IE9+); ArrayBuffer and DataView should work anywhere.

Modules es6.typed.array-buffer, es6.typed.data-view, es6.typed.int8-array, es6.typed.uint8-array, es6.typed.uint8-clamped-array, es6.typed.int16-array, es6.typed.uint16-array, es6.typed.int32-array, es6.typed.uint32-array, es6.typed.float32-array and es6.typed.float64-array.

new ArrayBuffer(length) -> buffer
  .isView(var) -> bool
  #slice(start = 0, end = @length) -> buffer
  #byteLength -> uint

new DataView(buffer, byteOffset = 0, byteLength = buffer.byteLength - byteOffset) -> view
  #getInt8(offset)                          -> int8
  #getUint8(offset)                         -> uint8
  #getInt16(offset, littleEndian = false)   -> int16
  #getUint16(offset, littleEndian = false)  -> uint16
  #getInt32(offset, littleEndian = false)   -> int32
  #getUint32(offset, littleEndian = false)  -> uint32
  #getFloat32(offset, littleEndian = false) -> float32
  #getFloat64(offset, littleEndian = false) -> float64
  #setInt8(offset, value)                          -> void
  #setUint8(offset, value)                         -> void
  #setInt16(offset, value, littleEndian = false)   -> void
  #setUint16(offset, value, littleEndian = false)  -> void
  #setInt32(offset, value, littleEndian = false)   -> void
  #setUint32(offset, value, littleEndian = false)  -> void
  #setFloat32(offset, value, littleEndian = false) -> void
  #setFloat64(offset, value, littleEndian = false) -> void
  #buffer     -> buffer
  #byteLength -> uint
  #byteOffset -> uint

{
  Int8Array,
  Uint8Array,
  Uint8ClampedArray,
  Int16Array,
  Uint16Array,
  Int32Array,
  Uint32Array,
  Float32Array,
  Float64Array
}
  new %TypedArray%(length)    -> typed
  new %TypedArray%(typed)     -> typed
  new %TypedArray%(arrayLike) -> typed
  new %TypedArray%(iterable)  -> typed
  new %TypedArray%(buffer, byteOffset = 0, length = (buffer.byteLength - byteOffset) / @BYTES_PER_ELEMENT) -> typed
  .BYTES_PER_ELEMENT -> uint
  .from(arrayLike | iterable, mapFn(val, index)?, that) -> typed
  .of(...args) -> typed
  #BYTES_PER_ELEMENT -> uint
  #copyWithin(target = 0, start = 0, end = @length) -> @
  #every(fn(val, index, @), that) -> bool
  #fill(val, start = 0, end = @length) -> @
  #filter(fn(val, index, @), that) -> typed
  #find(fn(val, index, @), that) -> val
  #findIndex(fn(val, index, @), that) -> index
  #forEach(fn(val, index, @), that) -> void
  #indexOf(var, from?) -> int
  #join(string = ',') -> string
  #lastIndexOf(var, from?) -> int
  #map(fn(val, index, @), that) -> typed
  #reduce(fn(memo, val, index, @), memo?) -> var
  #reduceRight(fn(memo, val, index, @), memo?) -> var
  #reverse() -> @
  #set(arrayLike, offset = 0) -> void
  #slice(start = 0, end = @length) -> typed
  #some(fn(val, index, @), that) -> bool
  #sort(fn(a, b)?) -> @
  #subarray(start = 0, end = @length) -> typed
  #toString() -> string
  #toLocaleString() -> string
  #values()     -> iterator
  #keys()       -> iterator
  #entries()    -> iterator
  #@@iterator() -> iterator (values)
  #buffer     -> buffer
  #byteLength -> uint
  #byteOffset -> uint
  #length     -> uint

CommonJS entry points:

core-js(/library)/es6/typed
core-js(/library)/fn/typed
core-js(/library)/fn/typed/array-buffer
core-js(/library)/fn/typed/data-view
core-js(/library)/fn/typed/int8-array
core-js(/library)/fn/typed/uint8-array
core-js(/library)/fn/typed/uint8-clamped-array
core-js(/library)/fn/typed/int16-array
core-js(/library)/fn/typed/uint16-array
core-js(/library)/fn/typed/int32-array
core-js(/library)/fn/typed/uint32-array
core-js(/library)/fn/typed/float32-array
core-js(/library)/fn/typed/float64-array

Examples:
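A native sketch of the DataView and typed array APIs listed above:

```javascript
// DataView endianness-aware access and typed array methods.
const buffer = new ArrayBuffer(8);
const view = new DataView(buffer);
view.setInt16(0, 256, true);          // little-endian write
const le = view.getInt16(0, true);    // 256

const typed = Uint8Array.of(1, 2, 3);
const mapped = typed.map(x => x * 2); // Uint8Array [2, 4, 6]
```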

Caveats when using typed arrays:
  • The Typed Arrays polyfill works completely as the spec requires, but because it internally uses getters / setters on each instance, it is slow and consumes significant memory. However, the typed arrays polyfill is required mainly for IE9 (and for Uint8ClampedArray in IE10 and early IE11); all modern engines have native typed arrays and require only fixes for constructors and methods.
  • The current version has no special entry points for methods; they can be added only together with the constructors. This may change in the future.
  • In the library version we can’t pollute native prototypes, so prototype methods are available as statics on the constructors.

ECMAScript 6: Reflect

Modules es6.reflect.apply, es6.reflect.construct, es6.reflect.define-property, es6.reflect.delete-property, es6.reflect.enumerate, es6.reflect.get, es6.reflect.get-own-property-descriptor, es6.reflect.get-prototype-of, es6.reflect.has, es6.reflect.is-extensible, es6.reflect.own-keys, es6.reflect.prevent-extensions, es6.reflect.set, es6.reflect.set-prototype-of.

CommonJS entry points:

core-js(/library)/es6/reflect
core-js(/library)/fn/reflect
core-js(/library)/fn/reflect/apply
core-js(/library)/fn/reflect/construct
core-js(/library)/fn/reflect/define-property
core-js(/library)/fn/reflect/delete-property
core-js(/library)/fn/reflect/enumerate (deprecated and will be removed from the next major release)
core-js(/library)/fn/reflect/get
core-js(/library)/fn/reflect/get-own-property-descriptor
core-js(/library)/fn/reflect/get-prototype-of
core-js(/library)/fn/reflect/has
core-js(/library)/fn/reflect/is-extensible
core-js(/library)/fn/reflect/own-keys
core-js(/library)/fn/reflect/prevent-extensions
core-js(/library)/fn/reflect/set
core-js(/library)/fn/reflect/set-prototype-of

Examples:
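A native sketch of a few of the polyfilled Reflect methods:

```javascript
// Reflect: defineProperty / get / ownKeys / construct.
const target = {};
Reflect.defineProperty(target, 'x', { value: 1 });
const got = Reflect.get(target, 'x');         // 1
const keys = Reflect.ownKeys({ a: 1, b: 2 }); // ['a', 'b']
const made = Reflect.construct(Date, []);     // a Date instance
```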

ECMAScript 7+ proposals

The TC39 process.

CommonJS entry points:

core-js(/library)/es7
core-js(/library)/es7/array
core-js(/library)/es7/global
core-js(/library)/es7/string
core-js(/library)/es7/map
core-js(/library)/es7/set
core-js(/library)/es7/error
core-js(/library)/es7/math
core-js(/library)/es7/system
core-js(/library)/es7/symbol
core-js(/library)/es7/reflect
core-js(/library)/es7/observable

The core-js/stage/4 entry point contains only stage 4 proposals, core-js/stage/3 - stage 3 and stage 4, etc.

Stage 4 proposals

CommonJS entry points:

CommonJS entry points:

Examples:

CommonJS entry points:

Examples:

CommonJS entry points:

Examples:

CommonJS entry points:

Examples:

CommonJS entry points:

Stage 3 proposals

CommonJS entry points:

CommonJS entry points:

Examples:

CommonJS entry points:

Examples:

Promise.resolve(42).finally(() => console.log('You will see it anyway'));

Promise.reject(42).finally(() => console.log('You will see it anyway'));

Stage 2 proposals

CommonJS entry points:

CommonJS entry points:

Examples:

* `Symbol.asyncIterator` for [async iteration proposal](https://github.com/tc39/proposal-async-iteration) - module [`es7.symbol.async-iterator`](https://github.com/zloirock/core-js/blob/v2.6.12/modules/es7.symbol.async-iterator.js)

```js
Symbol.asyncIterator -> @@asyncIterator
```

CommonJS entry points:

```js
core-js(/library)/fn/symbol/async-iterator
```

Stage 1 proposals

CommonJS entry points:

CommonJS entry points:

Examples:

CommonJS entry points:

Examples:

CommonJS entry points:

Examples:

CommonJS entry points:

Examples:

CommonJS entry points:

Examples:

CommonJS entry points:

CommonJS entry points:

Examples:

Stage 0 proposals

CommonJS entry points:

CommonJS entry points:

Examples:

CommonJS entry points:

CommonJS entry points:

CommonJS entry points:

CommonJS entry points:

Examples:

Pre-stage 0 proposals

CommonJS entry points:

CommonJS entry points:

Examples:

Web standards

CommonJS entry points:

setTimeout / setInterval

Module web.timers. Additional arguments fix for IE9-.

CommonJS entry points:

setImmediate

Module web.immediate. setImmediate proposal polyfill.

CommonJS entry points:

Examples:

Iterable DOM collections

Some DOM collections should have an iterable interface or should inherit from Array. That means they should have keys, values, entries and @@iterator methods for iteration, so core-js adds them. Module web.dom.iterable:

CommonJS entry points:

Examples:

Non-standard

CommonJS entry points:

Object

Modules core.object.is-object, core.object.classof, core.object.define, core.object.make.

CommonJS entry points:

Object classify examples:

Object.define and Object.make examples:

Dict

Module core.dict. Based on TC39 discuss / strawman.

CommonJS entry points:

Dict creates an object without a prototype from an iterable or a plain object.

Examples:

Dict.keys, Dict.values and Dict.entries return iterators for objects.

Examples:

Basic dict operations for objects with prototype examples:

The other methods of the Dict module are static equivalents of Array.prototype methods for dictionaries.

Examples:

Partial application

Module core.function.part.

CommonJS entry points:

Function#part partially applies a function without this binding. It uses the global variable _ (core._ for builds without global namespace pollution) as a placeholder and does not conflict with Underscore / LoDash.

Examples:

Number Iterator

Module core.number.iterator.

CommonJS entry points:

Examples:

Escaping strings

Modules core.regexp.escape, core.string.escape-html and core.string.unescape-html.

CommonJS entry points:

Examples:

delay

Module core.delay. Promise-returning delay function, esdiscuss.

CommonJS entry points:

Examples:

Helpers for iterators

Modules core.is-iterable, core.get-iterator, core.get-iterator-method - helpers for checking iterability / getting an iterator in the library version or, for example, for the arguments object:

CommonJS entry points:

Examples:

Missing polyfills

  • ES5 JSON is now missing only in IE7- and will never be added to core-js; if you need it in these old browsers, many implementations are available, for example, json3.
  • ES6 String#normalize is not a very useful feature, but its polyfill would be very large. If you need it, you can use unorm.
  • ES6 Proxy can’t be polyfilled, but for Node.js / Chromium with additional flags you can try harmony-reflect to adapt the old-style Proxy API to the final ES6 version.
  • ES6 logic for @@isConcatSpreadable and @@species (in most places) can be polyfilled without problems, but it would cause a serious slowdown in popular cases in some engines. It will be polyfilled once it is implemented in modern engines.
  • ES7 SIMD. core-js doesn’t polyfill this feature because of its large size and some other reasons. You can use this polyfill.
  • window.fetch is not a cross-platform feature; in some environments it makes no sense. For this reason, I don’t think it should be in core-js. Given the large number of requests, it may be added in the future. For now you can use, for example, this polyfill.
  • ECMA-402 Intl is omitted because of its size. You can use this polyfill.


Path-to-RegExp

Turn an Express-style path string such as /user/:name into a regular expression.

Note: This is a legacy branch. You should upgrade to 1.x.

Usage

pathToRegexp(path, keys, options)

  • path A string in the express format, an array of such strings, or a regular expression
  • keys An array to be populated with the keys present in the url. Once the function completes, this will be an array of strings.
  • options
    • options.sensitive Defaults to false, set this to true to make routes case sensitive
    • options.strict Defaults to false, set this to true to make the trailing slash matter.
    • options.end Defaults to true, set this to false to only match the prefix of the URL.
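The README's code samples did not survive extraction. As an illustrative sketch (not this legacy branch's exact output), an express-style path such as /user/:name compiles to a regular expression along these lines, with keys receiving the parameter names:

```js
// pathToRegexp('/user/:name', keys) returns a RegExp roughly equivalent to:
var re = /^\/user\/([^\/]+?)\/?$/;

re.test('/user/alice');    // true
re.exec('/user/alice')[1]; // 'alice' (the :name parameter)
re.test('/user');          // false
```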

Live Demo

You can see a live demo of this library in use at express-route-tester.



Emitter Build Status

Event emitter component.

Installation

$ component install component/emitter

API

Emitter(obj)

The Emitter may also be used as a mixin. For example a “plain” object may become an emitter, or you may extend an existing prototype.

As an Emitter instance:

As a mixin:

As a prototype mixin:

Emitter#on(event, fn)

Register an event handler fn.

Emitter#once(event, fn)

Register a single-shot event handler fn, removed immediately after it is invoked the first time.

Emitter#off(event, fn)

  • Pass event and fn to remove a listener.
  • Pass event to remove all listeners on that event.
  • Pass nothing to remove all listeners on all events.

Emitter#emit(event, …)

Emit an event with variable option args.

Emitter#listeners(event)

Return an array of callbacks, or an empty array.

Emitter#hasListeners(event)

Check if this emitter has event handlers.



cookie-signature

Sign and unsign cookies.

Example

events

Node’s event emitter for all engines.

This implements the Node.js [events][node.js docs] module for environments that do not have it, like browsers.

events currently matches the Node.js 11.13.0 API.

Note that the events module uses ES5 features. If you need to support very old browsers like IE8, use a shim like es5-shim. You need both the shim and the sham versions of es5-shim.

This module is maintained, but only by very few people. If you’d like to help, let us know in the Maintainer Needed issue!

Install

You usually do not have to install events yourself! If your code runs in Node.js, events is built in. If your code runs in the browser, bundlers like browserify or webpack also include the events module.

But if none of those apply, with npm do:

npm install events

Usage

API

See the [Node.js EventEmitter docs][node.js docs]. events currently matches the Node.js 11.13.0 API.

Contributing

PRs are very welcome! The main way to contribute to events is by porting features, bugfixes and tests from Node.js. Ideally, code contributions to this module are copy-pasted from Node.js and transpiled to ES5, rather than reimplemented from scratch. Matching the Node.js code as closely as possible makes maintenance simpler when new changes land in Node.js. This module intends to provide exactly the same API as Node.js, so features that are not available in the core events module will not be accepted. Feature requests should instead be directed at nodejs/node and will be added to this module once they are implemented in Node.js.

If there is a difference in behaviour between Node.js’s events module and this module, please open an issue!



Pluralize

NPM version NPM downloads Build status Test coverage File Size CDNJS

Pluralize and singularize any word.

Installation

npm install pluralize --save
yarn add pluralize
bower install pluralize --save

Node

AMD

<script> tag

Why?

This module uses a pre-defined list of rules, applied in order, to singularize or pluralize a given word. There are many cases where this is useful, such as any automation based on user input. For applications where the word(s) are known ahead of time, you can use a simple ternary (or function) which would be a much lighter alternative.

Usage

  • word: string The word to pluralize
  • count: number How many of the word exist
  • inclusive: boolean Whether to prefix with the number (e.g. 3 ducks)

Examples:

progress

Flexible ascii progress bar.

Installation

Usage

First we create a ProgressBar, giving it a format string as well as the total, telling the progress bar when it will be considered complete. After that all we need to do is tick() appropriately.

Options

These are keys in the options object you can pass to the progress bar along with total as seen in the example above.

  • curr current completed index
  • total total number of ticks to complete
  • width the displayed width of the progress bar defaulting to total
  • stream the output stream defaulting to stderr
  • head head character defaulting to complete character
  • complete completion character defaulting to “=”
  • incomplete incomplete character defaulting to “-”
  • renderThrottle minimum time between updates in milliseconds defaulting to 16
  • clear option to clear the bar on completion defaulting to false
  • callback optional function to call when the progress bar completes

Tokens

These are tokens you can use in the format of your progress bar.

  • :bar the progress bar itself
  • :current current tick number
  • :total total ticks
  • :elapsed time elapsed in seconds
  • :percent completion percentage
  • :eta estimated completion time in seconds
  • :rate rate of ticks per second

Custom Tokens

You can define custom tokens by adding a {'name': value} object parameter to your method (tick(), update(), etc.) calls.

The above example would result in the output below.

1: Hello World!
3: Goodbye World!

Examples

Download

In our download example each tick has a variable influence, so we pass the chunk length which adjusts the progress bar appropriately relative to the total length.

The above example results in a progress bar like the one below.

downloading [=====             ] 39/bps 29% 3.7s

Interrupt

To display a message during progress bar execution, use interrupt()

You can see more examples in the examples folder.



delayed-stream

Buffers events from a stream until you are ready to handle them.

Installation

Usage

The following example shows how to write a http echo server that delays its response by 1000 ms.

If you are not using Stream#pipe, you can also manually release the buffered events by calling delayedStream.resume():

Implementation

In order to use this meta stream properly, here are a few things you should know about the implementation.

Event Buffering / Proxying

All events of the source stream are hijacked by overwriting the source.emit method. Until node implements a catch-all event listener, this is the only way.

However, delayed-stream still continues to emit all events it captures on the source, regardless of whether you have released the delayed stream yet or not.

Upon creation, delayed-stream captures all source events and stores them in an internal event buffer. Once delayedStream.release() is called, all buffered events are emitted on the delayedStream, and the event buffer is cleared. After that, delayed-stream merely acts as a proxy for the underlaying source.

Error handling

Error events on source are buffered / proxied just like any other events. However, delayedStream.create attaches a no-op 'error' listener to the source. This way you only have to handle errors on the delayedStream object, rather than in two places.

Buffer limits

delayed-stream provides a maxDataSize property that can be used to limit the amount of data being buffered. In order to protect you from bad source streams that don’t react to source.pause(), this feature is enabled by default.

API

DelayedStream.create(source, options)

Returns a new delayedStream. Available options are:

  • pauseStream
  • maxDataSize

The description for those properties can be found below.

delayedStream.source

The source stream managed by this object. This is useful if you are passing your delayedStream around, and you still want to access properties on the source object.

delayedStream.pauseStream = true

Whether to pause the underlying source when calling DelayedStream.create(). Modifying this property afterwards has no effect.

delayedStream.maxDataSize = 1024 * 1024

The amount of data to buffer before emitting an error.

If the underlying source is emitting Buffer objects, the maxDataSize refers to bytes.

If the underlying source is emitting JavaScript strings, the size refers to characters.

If you know what you are doing, you can set this property to Infinity to disable this feature. You can also modify this property during runtime.

delayedStream.dataSize = 0

The amount of data buffered so far.

delayedStream.readable

An ECMA5 getter that returns the value of source.readable.

delayedStream.resume()

If the delayedStream has not been released so far, delayedStream.release() is called.

In either case, source.resume() is called.

delayedStream.pause()

Calls source.pause().

delayedStream.pipe(dest)

Calls delayedStream.resume() and then proxies the arguments to source.pipe.

delayedStream.release()

Emits and clears all events that have been buffered up so far. This does not resume the underlying source; use delayedStream.resume() for that.



Bytes utility

NPM Version NPM Downloads Build Status Test Coverage

Utility to parse a string bytes (ex: 1TB) to bytes (1099511627776) and vice-versa.

Installation

This is a Node.js module available through the npm registry. Installation is done using the npm install command:

Usage

bytes.format(number value, options): string|null

Format the given value in bytes into a string. If the value is negative, it is kept as such. If it is a float, it is rounded.

Arguments

| Name    | Type     | Description        |
|---------|----------|--------------------|
| value   | `number` | Value in bytes     |
| options | `Object` | Conversion options |

Options

| Property           | Type              | Description |
|--------------------|-------------------|-------------|
| decimalPlaces      | `number`\|`null`  | Maximum number of decimal places to include in output. Default value to `2`. |
| fixedDecimals      | `boolean`\|`null` | Whether to always display the maximum number of decimal places. Default value to `false`. |
| thousandsSeparator | `string`\|`null`  | Example of values: `' '`, `','` and `'.'`. Default value to `''`. |
| unit               | `string`\|`null`  | The unit in which the result will be returned (B/KB/MB/GB/TB). Default value to `''` (which means auto detect). |
| unitSeparator      | `string`\|`null`  | Separator to use between number and unit. Default value to `''`. |

Returns

| Name    | Type             | Description |
|---------|------------------|-------------|
| results | `string`\|`null` | Return null upon error. String value otherwise. |

Example

bytes.parse(string|number value): number|null

Parse the string value into an integer in bytes. If no unit is given, or value is a number, it is assumed the value is in bytes.

  • b for bytes
  • kb for kilobytes
  • mb for megabytes
  • gb for gigabytes
  • tb for terabytes
  • pb for petabytes

The units are in powers of two, not ten. This means 1kb = 1024b according to this parser.

Arguments

| Name  | Type               | Description |
|-------|--------------------|-------------|
| value | `string`\|`number` | String to parse, or number in bytes. |

Returns

| Name    | Type             | Description |
|---------|------------------|-------------|
| results | `number`\|`null` | Return null upon error. Value in bytes otherwise. |

Example



combined-stream

A stream that emits multiple other streams one after another.

NB Currently combined-stream works with streams version 1 only. There is ongoing effort to switch this library to streams version 2. Any help is welcome. :) Meanwhile you can explore other libraries that provide streams2 support with more or less compatibility with combined-stream.

  • combined-stream2: A drop-in streams2-compatible replacement for the combined-stream module.

  • multistream: A stream that emits multiple other streams one after another.

Installation

Usage

Here is a simple example that shows how you can use combined-stream to combine two files into one:

While the example above works great, it will pause all source streams until they are needed. If you don’t want that to happen, you can set pauseStreams to false:

However, what if you don’t have all the source streams yet, or you don’t want to allocate the resources (file descriptors, memory, etc.) for them right away? Well, in that case you can simply provide a callback that supplies the stream by calling a next() function:

API

CombinedStream.create(options)

Returns a new combined stream object. Available options are:

  • maxDataSize
  • pauseStreams

The effect of those options is described below.

combinedStream.pauseStreams = true

Whether to apply back pressure to the underlying streams. If set to false, the underlying streams will never be paused. If set to true, the underlying streams will be paused right after being appended, as well as when delayedStream.pipe() wants to throttle.

combinedStream.maxDataSize = 2 * 1024 * 1024

The maximum amount of bytes (or characters) to buffer for all source streams. If this value is exceeded, combinedStream emits an 'error' event.

combinedStream.dataSize = 0

The amount of bytes (or characters) currently buffered by combinedStream.

combinedStream.append(stream)

Appends the given stream to the combinedStream object. If pauseStreams is set to `true`, this stream will also be paused right away.

stream can also be a function that takes one parameter called next. next is a function that must be invoked in order to provide the next stream, see the example above.

Regardless of how the stream is appended, combined-stream always attaches an 'error' listener to it, so you don’t have to do that manually.

Special case: stream can also be a String or Buffer.

combinedStream.write(data)

You should not call this, combinedStream takes care of piping the appended streams into itself for you.

combinedStream.resume()

Causes combinedStream to start draining the streams it manages. The function is idempotent, and also emits a 'resume' event each time, which usually goes to the stream that is currently being drained.

combinedStream.pause();

If combinedStream.pauseStreams is set to false, this does nothing. Otherwise a 'pause' event is emitted; this goes to the stream that is currently being drained, so you can use it to apply back pressure.

combinedStream.end();

Sets combinedStream.writable to false, emits an 'end' event, and removes all streams from the queue.

combinedStream.destroy();

Same as combinedStream.end(), except it emits a 'close' event instead of 'end'.

Express Logo

Fast, unopinionated, minimalist web framework for node.

NPM Version NPM Downloads Linux Build Windows Build Test Coverage

Installation

This is a Node.js module available through the npm registry.

Before installing, download and install Node.js. Node.js 0.10 or higher is required.

Installation is done using the npm install command:

Follow our installing guide for more information.

Features

  • Robust routing
  • Focus on high performance
  • Super-high test coverage
  • HTTP helpers (redirection, caching, etc)
  • View system supporting 14+ template engines
  • Content negotiation
  • Executable for generating applications quickly

Docs & Community

PROTIP Be sure to read Migrating from 3.x to 4.x as well as New features in 4.x.

Security Issues

If you discover a security vulnerability in Express, please see Security Policies and Procedures.

Quick Start

The quickest way to get started with express is to utilize the executable express(1) to generate an application as shown below:

Install the executable. The executable’s major version will match Express’s:

Create the app:

Install dependencies:

Start the server:

View the website at: http://localhost:3000

Philosophy

The Express philosophy is to provide small, robust tooling for HTTP servers, making it a great solution for single page applications, web sites, hybrids, or public HTTP APIs.

Express does not force you to use any specific ORM or template engine. With support for over 14 template engines via Consolidate.js, you can quickly craft your perfect framework.

Examples

To view the examples, clone the Express repo and install the dependencies:

Then run whichever example you want:

Tests

To run the test suite, first install the dependencies, then run npm test:

Contributing

Contributing Guide

People

The original author of Express is TJ Holowaychuk

The current lead maintainer is Douglas Christopher Wilson

List of all contributors



safer-buffer travis npm javascript style guide Security Responsible Disclosure

Modern Buffer API polyfill without footguns, working on Node.js from 0.8 to current.

How to use?

First, port all Buffer() and new Buffer() calls to Buffer.alloc() and Buffer.from() API.

Then, to achieve compatibility with outdated Node.js versions (<4.5.0 and 5.x <5.9.0), use const Buffer = require('safer-buffer').Buffer in all files where you make calls to the new Buffer API. Use var instead of const if you need that for your Node.js version range support.

Also, see the porting Buffer guide.

Do I need it?

Hopefully, not — dropping support for outdated Node.js versions should be fine nowadays, and that is the recommended path forward. You do need to port to the Buffer.alloc() and Buffer.from() API, though.

See the porting guide for a better description.

Why not safe-buffer?

In short: while safe-buffer serves as a polyfill for the new API, it allows old API usage and itself contains footguns.

safe-buffer could be used safely to get the new API while still keeping support for older Node.js versions (like this module), but while analyzing ecosystem usage of the old Buffer API I found out that safe-buffer is itself causing problems in some cases.

For example, consider the following snippet:

$ cat example.unsafe.js
console.log(Buffer(20))
$ ./node-v6.13.0-linux-x64/bin/node example.unsafe.js
<Buffer 0a 00 00 00 00 00 00 00 28 13 de 02 00 00 00 00 05 00 00 00>
$ standard example.unsafe.js
standard: Use JavaScript Standard Style (https://standardjs.com)
  /home/chalker/repo/safer-buffer/example.unsafe.js:2:13: 'Buffer()' was deprecated since v6. Use 'Buffer.alloc()' or 'Buffer.from()' (use 'https://www.npmjs.com/package/safe-buffer' for '<4.5.0') instead.

This allocates an uninitialized chunk of memory and writes it to the console. The standard linter (among others) catches that and warns people to avoid using the unsafe API.

Let’s now throw in safe-buffer!

$ cat example.safe-buffer.js
const Buffer = require('safe-buffer').Buffer
console.log(Buffer(20))
$ standard example.safe-buffer.js
$ ./node-v6.13.0-linux-x64/bin/node example.safe-buffer.js
<Buffer 08 00 00 00 00 00 00 00 28 58 01 82 fe 7f 00 00 00 00 00 00>

See the problem? Adding in safe-buffer magically removes the lint warning, but the behavior remains identical to what we had before, and when launched on Node.js 6.x LTS — this dumps out chunks of uninitialized memory. And this code will still emit runtime warnings on Node.js 10.x and above.

That was done by design. I first considered changing safe-buffer, prohibiting old API usage or emitting warnings on it, but that significantly diverges from safe-buffer design. After some discussion, it was decided to move my approach into a separate package, and this is that separate package.

This footgun is not imaginary — I observed top-downloaded packages doing that kind of thing, «fixing» the lint warning by blindly including safe-buffer without any actual changes.

Also, in some cases, even if the API was migrated to the safe Buffer API, a random pull request can bring unsafe Buffer API usage back to the codebase by adding new calls. That could go unnoticed even if you have a linter prohibiting it (because of the reason stated above), and could even pass CI. I also observed that being done in popular packages.

Some examples:

  • webdriverio (a module with 548 759 downloads/month),
  • websocket-stream (218 288 d/m, fix in maxogden/websocket-stream#142),
  • node-serialport (113 138 d/m, fix in node-serialport/node-serialport#1510),
  • karma (3 973 193 d/m, fix in karma-runner/karma#2947),
  • spdy-transport (5 970 727 d/m, fix in spdy-http2/spdy-transport#53).
  • And there are a lot more over the ecosystem.

I filed a PR at mysticatea/eslint-plugin-node#110 to partially fix that (for cases when that lint rule is used), but it is a semver-major change for linter rules and presets, so it would take significant time for that to reach actual setups. It also hasn’t been released yet (2018-03-20).

Also, safer-buffer discourages the usage of .allocUnsafe(), which is often done by mistake. It still supports it behind an explicit concern barrier, by placing it under require('safer-buffer/dangereous').

But isn’t throwing bad?

Not really. It’s an error that could be noticed and fixed early, instead of causing havoc later like unguarded new Buffer() calls that end up receiving user input can do.

This package affects only the files where var Buffer = require('safer-buffer').Buffer was done, so it is really simple to keep track of things and make sure that you don’t mix old API usage with that. Also, CI should hint anything that you might have missed.

New commits, if tested, won’t land new usage of the unsafe Buffer API this way. Node.js 10.x also deals with that by printing a runtime deprecation warning.

Would it affect third-party modules?

No, unless you explicitly do an awful thing like monkey-patching or overriding the built-in Buffer. Don’t do that.

But I don’t want throwing…

That is also fine!

Also, it could be better in some cases when you don’t have comprehensive enough test coverage.

In that case — just don’t override Buffer and use var SaferBuffer = require('safer-buffer').Buffer instead.

That way, everything using Buffer natively would still work, but there would be two drawbacks:

  • Buffer.from/Buffer.alloc won’t be polyfilled — use SaferBuffer.from and SaferBuffer.alloc instead.
  • You are still open to accidentally using the insecure deprecated API — use a linter to catch that.

Note that using a linter to catch accidental Buffer constructor usage in this case is strongly recommended. Buffer is not overridden in this use case, so linters won’t get confused.

«Without footguns»?

Well, it is still possible to do some things with the Buffer API, e.g. accessing the .buffer property on older versions and duping things from there. You probably shouldn’t do that in your code.

The intention is to remove the most significant footguns that affect lots of packages in the ecosystem, and to do it in the proper way.

Also, this package doesn’t protect against security issues affecting some Node.js versions, so for usage in your own production code, it is still recommended to update to a Node.js version supported by upstream.



depd

NPM Version NPM Downloads Node.js Version Linux Build Windows Build Coverage Status

Deprecate all the things

With great modules comes great responsibility; mark things deprecated!

Install

This module is installed directly using npm:

This module can also be bundled with systems like Browserify or webpack, though by default this module will alter its API to no longer display or track deprecations.

API

This library allows you to display deprecation messages to your users. This library goes above and beyond with deprecation warnings by introspection of the call stack (but only the bits that it is interested in).

Instead of just warning on the first invocation of a deprecated function and never again, this module will warn on the first invocation of a deprecated function per unique call site, making it ideal to alert users of all deprecated uses across the code base, rather than just whatever happens to execute first.

The deprecation warnings from this module also include the file and line information for the call into the module that the deprecated function was in.

NOTE this library has a similar interface to the debug module, and this module uses the calling file to get the boundary for the call stacks, so you should always create a new deprecate object in each file and not within some central file.

depd(namespace)

Create a new deprecate function that uses the given namespace name in the messages and will display the call site prior to the stack entering the file this function was called from. It is highly suggested you use the name of your module as the namespace.

deprecate(message)

Call this function from deprecated code to display a deprecation message. This message will appear once per unique caller site. Caller site is the first call site in the stack in a different file from the caller of this function.

If the message is omitted, a message is generated for you based on the site of the deprecate() call and will display the name of the function called, similar to the name displayed in a stack trace.

deprecate.function(fn, message)

Call this function to wrap a given function in a deprecation message on any call to the function. An optional message can be supplied to provide a custom message.

deprecate.property(obj, prop, message)

Call this function to wrap a given property on object in a deprecation message on any accessing or setting of the property. An optional message can be supplied to provide a custom message.

The method must be called on the object where the property belongs (not inherited from the prototype).

If the property is a data descriptor, it will be converted to an accessor descriptor in order to display the deprecation message.

process.on(‘deprecation’, fn)

This module will allow easy capturing of deprecation errors by emitting the errors as the type “deprecation” on the global process. If there are no listeners for this type, the errors are written to STDERR as normal, but if there are any listeners, nothing will be written to STDERR and instead only emitted. From there, you can write the errors in a different format or to a logging source.

The error represents the deprecation and is emitted only once with the same rules as writing to STDERR. The error has the following properties:

  • message - This is the message given by the library
  • name - This is always 'DeprecationError'
  • namespace - This is the namespace the deprecation came from
  • stack - This is the stack of the call to the deprecated thing

Example error.stack output:

DeprecationError: my-cool-module deprecated oldfunction
    at Object.<anonymous> ([eval]-wrapper:6:22)
    at Module._compile (module.js:456:26)
    at evalScript (node.js:532:25)
    at startup (node.js:80:7)
    at node.js:902:3

process.env.NO_DEPRECATION

As a user of modules that are deprecated, the environment variable NO_DEPRECATION is provided as a quick solution to silencing deprecation warnings from being output. The format of this is similar to that of DEBUG:

This will suppress deprecations from being output for “my-module” and “othermod”. The value is a list of comma-separated namespaces. To suppress every warning across all namespaces, use the value * for a namespace.

Providing the argument --no-deprecation to the node executable will suppress all deprecations (only available in Node.js 0.8 or higher).

NOTE This will not suppress the deprecations given to any “deprecation” event listeners, just the output to STDERR.

process.env.TRACE_DEPRECATION

As a user of modules that are deprecated, the environment variable TRACE_DEPRECATION is provided as a solution to getting more detailed location information in deprecation warnings by including the entire stack trace. The format of this is the same as NO_DEPRECATION:

This will include stack traces for deprecations being output for “my-module” and “othermod”. The value is a list of comma-separated namespaces. To trace every warning across all namespaces, use the value * for a namespace.

Providing the argument --trace-deprecation to the node executable will trace all deprecations (only available in Node.js 0.8 or higher).

NOTE This will not trace the deprecations silenced by NO_DEPRECATION.

Display


When a user calls a function in your library that you mark deprecated, they will see the following written to STDERR (in the given colors, similar colors and layout to the debug module):

bright cyan    bright yellow
|              |          reset       cyan
|              |          |           |
▼              ▼          ▼           ▼
my-cool-module deprecated oldfunction [eval]-wrapper:6:22
▲              ▲          ▲           ▲
|              |          |           |
namespace      |          |           location of mycoolmod.oldfunction() call
               |          deprecation message
               the word "deprecated"

If the user redirects their STDERR to a file or somewhere that does not support colors, they see (similar layout to the debug module):

Sun, 15 Jun 2014 05:21:37 GMT my-cool-module deprecated oldfunction at [eval]-wrapper:6:22
▲                             ▲              ▲          ▲              ▲
|                             |              |          |              |
timestamp of message          namespace      |          |             location of mycoolmod.oldfunction() call
                                             |          deprecation message
                                             the word "deprecated"

Examples

Deprecating all calls to a function

This will display a deprecated message about “oldfunction” being deprecated from “my-module” on STDERR.

Conditionally deprecating a function call

This will display a deprecated message about “weirdfunction” being deprecated from “my-module” on STDERR when called with less than 2 arguments.

When calling deprecate as a function, the warning is counted per call site within your own module, so you can display different deprecations depending on different situations and the users will still get all the warnings:

Deprecating property access

This will display a deprecated message about “oldprop” being deprecated from “my-module” on STDERR when accessed. A deprecation will be displayed when setting the value and when getting the value.
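Property deprecation like the above can be sketched with a getter/setter pair. This is a hypothetical, self-contained reimplementation for illustration, not the real depd module (which also tracks call sites and formats output as shown earlier):

```javascript
// Sketch: warn on every get/set of a deprecated property.
// Uses Node's built-in process.emitWarning; depd itself writes to STDERR directly.
function deprecateProperty(obj, prop, message) {
  let value = obj[prop];
  Object.defineProperty(obj, prop, {
    get() { process.emitWarning(message, 'DeprecationWarning'); return value; },
    set(v) { process.emitWarning(message, 'DeprecationWarning'); value = v; },
  });
}

const mod = { oldprop: 1 };
deprecateProperty(mod, 'oldprop', 'my-module oldprop deprecated');
mod.oldprop = 2; // warns, then stores the value as usual
```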



trim-newlines Build Status

Trim newlines from the start and/or end of a string

Install

npm install trim-newlines

Usage

API

trimNewlines(string)

Trim from the start and end of a string.

trimNewlines.start(string)

Trim from the start of a string.

trimNewlines.end(string)

Trim from the end of a string.
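The three functions above can be sketched with simple regexes. This is an illustrative reimplementation, not the published module:

```javascript
// Sketch of the trim-newlines API: strip \r and \n from either end.
const trimNewlines = (s) => s.replace(/^[\r\n]+|[\r\n]+$/g, '');
trimNewlines.start = (s) => s.replace(/^[\r\n]+/, '');
trimNewlines.end = (s) => s.replace(/[\r\n]+$/, '');

console.log(trimNewlines('\nunicorn\n'));     // 'unicorn'
console.log(trimNewlines.start('\nunicorn')); // 'unicorn'
```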

  • trim-left - Similar to String#trim() but removes only whitespace on the left
  • trim-right - Similar to String#trim() but removes only whitespace on the right.


irregular-plurals Build Status

Map of nouns to their irregular plural form

An irregular plural in this library is defined as a noun that cannot be made plural by applying these rules:

  • If the noun ends in an “s”, “x”, “z”, “ch” or “sh”, add “es”
  • If the noun ends in a “y” and is preceded by a consonant, drop the “y” and add “ies”
  • If the noun ends in a “y” and is preceded by a vowel, add “s”
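The regular rules above can be expressed directly; any noun they fail to pluralize correctly belongs in the irregular list:

```javascript
// Sketch of the regular pluralization rules; irregular nouns bypass these.
function regularPlural(noun) {
  if (/(?:s|x|z|ch|sh)$/.test(noun)) return noun + 'es';   // bus -> buses
  if (/[^aeiou]y$/.test(noun)) return noun.slice(0, -1) + 'ies'; // puppy -> puppies
  return noun + 's';                                        // day -> days
}

console.log(regularPlural('fox'));   // 'foxes'
console.log(regularPlural('puppy')); // 'puppies'
```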

The list is just a JSON file and can be used anywhere.

Install

npm install irregular-plurals

Usage

  • plur - Pluralize a word


escape-string-regexp Build Status

Escape RegExp special characters

Install

npm install escape-string-regexp

Usage

You can also use this to escape a string that is inserted into the middle of a regex, for example, into a character class.
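The escaping idea can be sketched in one line. The character set below is an assumption for illustration; the published module's exact set may differ:

```javascript
// Sketch: backslash-escape RegExp metacharacters so the string matches literally.
const escapeStringRegexp = (s) => s.replace(/[|\\{}()[\]^$+*?.-]/g, '\\$&');

const pattern = new RegExp(escapeStringRegexp('how much $ for a 🦄?'));
console.log(pattern.test('how much $ for a 🦄?')); // true
```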


<b>
    <a href="https://tidelift.com/subscription/pkg/npm-escape-string-regexp?utm_source=npm-escape-string-regexp&utm_medium=referral&utm_campaign=readme">Get professional support for this package with a Tidelift subscription</a>
</b>
<br>
<sub>
    Tidelift helps make open source sustainable for maintainers while giving companies<br>assurances about security, maintenance, and licensing for their dependencies.
</sub>


mimic-response Build Status

Mimic a Node.js HTTP response stream

Install

npm install mimic-response

Usage

API

mimicResponse(from, to)

from

Type: Stream

Node.js HTTP response stream.

to

Type: Stream

Any stream.



path-dirname Build Status

Node.js path.dirname() ponyfill

This was needed in order to expose path.posix.dirname() on Node.js v0.10

Install

npm install --save path-dirname

Usage

API

See the path.dirname() docs.

pathDirname(path)

pathDirname.posix(path)

POSIX specific version.

pathDirname.win32(path)

Windows specific version.



path-type Build Status

Check if a path is a file, directory, or symlink

Install

npm install path-type

Usage

API

isFile(path)

Check whether the passed path is a file.

Returns a Promise<boolean>.

path

Type: string

The path to check.

isDirectory(path)

Check whether the passed path is a directory.

Returns a Promise<boolean>.

isSymlink(path)

Check whether the passed path is a symlink.

Returns a Promise<boolean>.

isFileSync(path)

Synchronously check whether the passed path is a file.

Returns a boolean.

isDirectorySync(path)

Synchronously check whether the passed path is a directory.

Returns a boolean.

isSymlinkSync(path)

Synchronously check whether the passed path is a symlink.

Returns a boolean.



cli-cursor Build Status

Toggle the CLI cursor

The cursor is gracefully restored if the process exits.

Install

npm install cli-cursor

Usage

API

.show(stream?)

.hide(stream?)

.toggle(force?, stream?)

force

Useful for showing or hiding the cursor based on a boolean.

stream

Type: stream.Writable
Default: process.stderr


<b>
    <a href="https://tidelift.com/subscription/pkg/npm-cli-cursor?utm_source=npm-cli-cursor&utm_medium=referral&utm_campaign=readme">Get professional support for this package with a Tidelift subscription</a>
</b>
<br>
<sub>
    Tidelift helps make open source sustainable for maintainers while giving companies<br>assurances about security, maintenance, and licensing for their dependencies.
</sub>


date-time Build Status

Pretty datetime: 2014-01-09 06:46:01

Install

npm install date-time

Usage

API

dateTime(options)

options

Type: Object

date

Type: Date
Default: new Date()

Custom date.

local

Type: boolean
Default: true

Show the date in the local time zone.

showTimeZone

Type: boolean
Default: false

Show the UTC time zone postfix.

showMilliseconds

Type: boolean
Default: false

Show the milliseconds in the date if any.



plur Build Status

Pluralize a word

Install

npm install plur

Usage

API

plur(word, plural?, count)

word

Type: string

Word to pluralize.

plural

Type: string
Default: the word pluralized by the rules below

  • Irregular nouns will use this list.
  • Words ending in s, x, z, ch, sh will be pluralized with -es (eg. foxes).
  • Words ending in y that are preceded by a consonant will be pluralized by replacing y with -ies (eg. puppies).
  • All other words will have “s” added to the end (eg. days).

Pluralized word.

The plural suffix will match the case of the last letter in the word.

This option is only for extreme edge-cases. You probably won’t need it.

count

Type: number

Count to determine whether to use singular or plural.
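The interplay of word, plural, and count can be sketched as follows. The irregulars table here is a one-entry stand-in for the irregular-plurals list the real module uses:

```javascript
// Sketch of plur(word, plural?, count): count 1 keeps the singular,
// an explicit plural wins, then irregulars, then the regular rules.
const irregular = { child: 'children' }; // stand-in for the full irregular-plurals list

function plur(word, plural, count) {
  if (typeof plural === 'number') { count = plural; plural = undefined; }
  if (count === 1) return word;
  if (plural) return plural;
  if (irregular[word]) return irregular[word];
  if (/(?:s|x|z|ch|sh)$/.test(word)) return word + 'es';
  if (/[^aeiou]y$/.test(word)) return word.slice(0, -1) + 'ies';
  return word + 's';
}

console.log(plur('puppy', 2)); // 'puppies'
console.log(plur('puppy', 1)); // 'puppy'
```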



binary-extensions Build Status

List of binary file extensions

The list is just a JSON file and can be used anywhere.

Install

npm install binary-extensions

Usage


<b>
    <a href="https://tidelift.com/subscription/pkg/npm-binary-extensions?utm_source=npm-binary-extensions&utm_medium=referral&utm_campaign=readme">Get professional support for this package with a Tidelift subscription</a>
</b>
<br>
<sub>
    Tidelift helps make open source sustainable for maintainers while giving companies<br>assurances about security, maintenance, and licensing for their dependencies.
</sub>


path-is-absolute Build Status

Node.js 0.12 path.isAbsolute() ponyfill

Install

npm install --save path-is-absolute

Usage

API

See the path.isAbsolute() docs.

pathIsAbsolute(path)

pathIsAbsolute.posix(path)

POSIX specific version.

pathIsAbsolute.win32(path)

Windows specific version.
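The two platform-specific checks can be sketched like so; the Windows branch here is simplified relative to the real path.isAbsolute (drive letters and leading slashes only):

```javascript
// Sketch of the posix and win32 variants of path.isAbsolute.
const posix = (p) => p.charAt(0) === '/';
const win32 = (p) => /^(?:[a-zA-Z]:[\\/]|[\\/])/.test(p); // C:\foo, /foo, \\server\share

console.log(posix('/home/foo')); // true
console.log(win32('C:\\foo'));   // true
```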



latest-version Build Status

Get the latest version of an npm package

Fetches the version directly from the registry instead of depending on the massive npm module like the latest module does.

Install

npm install latest-version

Usage



import-local Build Status

Let a globally installed package use a locally installed version of itself if available

Useful for CLI tools that want to defer to the user’s locally installed version when available, but still work if it’s not installed locally. For example, AVA and XO use this method.

Install

npm install import-local

Usage


<b>
    <a href="https://tidelift.com/subscription/pkg/npm-import-local?utm_source=npm-import-local&utm_medium=referral&utm_campaign=readme">Get professional support for this package with a Tidelift subscription</a>
</b>
<br>
<sub>
    Tidelift helps make open source sustainable for maintainers while giving companies<br>assurances about security, maintenance, and licensing for their dependencies.
</sub>


indent-string Build Status

Indent each line in a string

Install

npm install indent-string

Usage

API

indentString(string, count, options)

string

Type: string

The string to indent.

count

Type: number
Default: 1

How many times you want options.indent repeated.

options

Type: object

indent

Type: string
Default: ' ' (a single space)

The string to use for the indent.

includeEmptyLines

Type: boolean
Default: false

Also indent empty lines.
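The options above can be sketched in a few lines. This is an illustrative reimplementation, assuming the default indent is a single space:

```javascript
// Sketch of indentString: prepend indent.repeat(count) to each line,
// skipping empty lines unless includeEmptyLines is set.
function indentString(string, count = 1, options = {}) {
  const { indent = ' ', includeEmptyLines = false } = options;
  const regex = includeEmptyLines ? /^/gm : /^(?!\s*$)/gm;
  return string.replace(regex, indent.repeat(count));
}

console.log(indentString('a\nb', 2, { indent: '-' })); // '--a\n--b'
```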



indent-string Build Status

Indent each line in a string

Install

npm install indent-string

Usage

API

indentString(input, count, options)

input

Type: string

String you want to indent.

count

Type: number
Default: 1

How many times you want indent repeated.

options

Type: Object

indent

Type: string
Default: ' ' (a single space)

String to use for the indent.

includeEmptyLines

Type: boolean
Default: false

Also indent empty lines.



registry-url Build Status

Get the set npm registry URL

It’s usually https://registry.npmjs.org/, but it’s configurable.

Use this if you do anything with the npm registry as users will expect it to use their configured registry.

Install

npm install registry-url

Usage

It can also retrieve the registry URL associated with an npm scope.

If the provided scope is not in the user’s .npmrc file, then registry-url will check for the existence of registry, or if that’s not set, fallback to the default npm registry.



mimic-fn Build Status

Make a function mimic another one

Useful when you wrap a function in another function and want to preserve the original name and other properties.

Install

npm install mimic-fn

Usage

API

It will copy over the properties name, length, displayName, and any custom properties you may have set.

mimicFn(to, from)

Modifies the to function and returns it.

to

Type: Function

Mimicking function.

from

Type: Function

Function to mimic.

  • rename-fn - Rename a function
  • keep-func-props - Wrap a function without changing its name, length and other properties


line-column-path Build Status

Parse and stringify file paths with line and column like unicorn.js:8:14

Install

npm install line-column-path

Usage

API

.parse(input)

input

Type: string | object

File path to parse.

Can also be an object that you want to validate and normalize.

.stringify(path, options)

path

Type: object

Object with a .file property and optionally a .line and .column property.

options

Type: object

file

Type: boolean
Default: true

Output the file path.

Setting this to false will result in 8:18 instead of unicorn.js:8:14.

column

Type: boolean
Default: true

Output the column.

Setting this to false will result in unicorn.js:8 instead of unicorn.js:8:14.



string-width Build Status

Get the visual width of a string - the number of columns required to display it

Some Unicode characters are fullwidth and use double the normal width. ANSI escape codes are stripped and do not affect the width.

Useful to be able to measure the actual width of command-line output.

Install

npm install string-width

Usage



pkg-dir Build Status

Find the root directory of a Node.js project or npm package

Install

npm install --save pkg-dir

Usage

/
└── Users
    └── sindresorhus
        └── foo
            ├── package.json
            └── bar
                ├── baz
                └── example.js

API

pkgDir(cwd)

Returns a Promise for either the project root path or null if it couldn’t be found.

pkgDir.sync(cwd)

Returns the project root path or null.

cwd

Type: string
Default: process.cwd()

Directory to start from.

  • pkg-dir-cli - CLI for this module
  • pkg-up - Find the closest package.json file
  • find-up - Find a file by walking up parent directories


decamelize-keys Build Status

Convert object keys from camelCase to lowercase with a custom separator using decamelize

This project was forked from camelcase-keys and converted to do the opposite.

Install

npm install decamelize-keys

Usage

API

decamelizeKeys(input, separator, options)

input

Type: object
Required

Object to decamelize.

separator

Type: string
Default: _

A string to insert between words.

options

Type: object

separator

Type: string
Default: _

Alternative way to specify separator.

exclude

Type: array
Default: []

Exclude keys from being decamelized.

See camelcase-keys for the inverse.



pupa Build Status

Simple micro templating

Useful when all you need is to fill in some placeholders.

Install

npm install pupa

Usage

API

pupa(template, data)

template

Type: string

Text with placeholders for data properties.

data

Type: object | unknown[]

Data to interpolate into template.

FAQ

What about template literals?

Template literals expand on creation. This module expands the template on execution, which can be useful if either or both template and data are lazily created or user-supplied.
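The core idea of execution-time expansion can be sketched with a regex replace. This illustrative version only handles flat {key} placeholders; the real module also supports features such as dotted paths:

```javascript
// Sketch of pupa: substitute {key} placeholders from a data object at call time.
function pupa(template, data) {
  return template.replace(/\{(\w+)\}/g, (match, key) =>
    key in data ? data[key] : match // leave unknown placeholders untouched
  );
}

console.log(pupa('Hi {name}!', { name: 'Sindre' })); // 'Hi Sindre!'
```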



parse-json Build Status

Parse JSON with more helpful errors

Install

npm install parse-json

Usage

API

parseJson(input, reviver, filename)

input

Type: string

reviver

Type: Function

Prescribes how the value originally produced by parsing is transformed, before being returned. See JSON.parse docs for more.

filename

Type: string

Filename displayed in the error message.



md5-hex Build Status

Create an MD5 hash with hex encoding

Please don’t use MD5 hashes for anything sensitive!

Works in the browser too, when used with a bundler like Webpack, Rollup, or Browserify.

Check out hasha if you need something more flexible.

Install

npm install md5-hex

Usage

API

md5Hex(data)

data

Type: Buffer | string | Array<Buffer | string>

Prefer buffers as they’re faster to hash, but strings can be useful for small things.

Pass an array instead of concatenating strings and/or buffers. The output is the same, but arrays do not incur the overhead of concatenation.

  • crypto-hash - Tiny hashing module that uses the native crypto API in Node.js and the browser
  • hasha - Hashing made simple
  • hash-obj - Get the hash of an object


parse-json Build Status

Parse JSON with more helpful errors

Install

npm install --save parse-json

Usage

API

parseJson(input, reviver, filename)

input

Type: string

reviver

Type: function

Prescribes how the value originally produced by parsing is transformed, before being returned. See JSON.parse docs for more.

filename

Type: string

Filename displayed in the error message.



pkg-dir Build Status

Find the root directory of a Node.js project or npm package

Install

npm install pkg-dir

Usage

/
└── Users
    └── sindresorhus
        └── foo
            ├── package.json
            └── bar
                ├── baz
                └── example.js

API

pkgDir(cwd)

Returns a Promise for either the project root path or undefined if it couldn’t be found.

pkgDir.sync(cwd)

Returns the project root path or undefined if it couldn’t be found.

cwd

Type: string
Default: process.cwd()

Directory to start from.

  • pkg-dir-cli - CLI for this module
  • pkg-up - Find the closest package.json file
  • find-up - Find a file by walking up parent directories


path-exists Build Status

Check if a path exists

Because fs.exists() is being deprecated, but there’s still a genuine use-case of being able to check if a path exists for other purposes than doing IO with it.

Never use this before handling a file though:

In particular, checking if a file exists before opening it is an anti-pattern that leaves you vulnerable to race conditions: another process may remove the file between the calls to fs.exists() and fs.open(). Just open the file and handle the error when it’s not there.

Install

npm install --save path-exists

Usage

API

pathExists(path)

Returns a promise for a boolean of whether the path exists.

pathExists.sync(path)

Returns a boolean of whether the path exists.





p-try Build Status

Start a promise chain

How is it useful?

Install

npm install p-try

Usage

API

pTry(fn, …arguments)

Returns a Promise resolved with the value of calling fn(...arguments). If the function throws an error, the returned Promise will be rejected with that error.

fn

The function to run to start the promise chain.

arguments

Arguments to pass to fn.
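The contract above can be sketched in one line: wrapping the call in a promise executor turns a synchronous throw into a rejection, which is exactly what makes pTry useful for starting a chain:

```javascript
// Sketch of pTry: resolve with fn(...args); a sync throw becomes a rejection.
const pTry = (fn, ...args) => new Promise((resolve) => resolve(fn(...args)));

pTry(() => { throw new Error('boom'); })
  .catch((error) => console.log(error.message)); // 'boom'
```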

  • p-finally - Promise#finally() ponyfill - Invoked when the promise is settled regardless of outcome
  • More…


path-key Build Status

Get the PATH environment variable key cross-platform

It’s usually PATH, but on Windows it can be any casing like Path

Install

npm install path-key

Usage

API

pathKey(options?)

options

Type: object

env

Type: object
Default: process.env

Use a custom environment variables object.

platform

Type: string
Default: process.platform

Get the PATH key for a specific platform.
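The lookup the options describe can be sketched as follows; on Windows the key can have any casing, so the environment is scanned for a case-insensitive match:

```javascript
// Sketch of pathKey: 'PATH' everywhere except Windows, where the actual
// casing present in the environment wins (falling back to 'Path').
function pathKey({ env = process.env, platform = process.platform } = {}) {
  if (platform !== 'win32') return 'PATH';
  return Object.keys(env).reverse().find((k) => k.toUpperCase() === 'PATH') || 'Path';
}

console.log(pathKey({ platform: 'linux' })); // 'PATH'
```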


<b>
    <a href="https://tidelift.com/subscription/pkg/npm-path-key?utm_source=npm-path-key&utm_medium=referral&utm_campaign=readme">Get professional support for this package with a Tidelift subscription</a>
</b>
<br>
<sub>
    Tidelift helps make open source sustainable for maintainers while giving companies<br>assurances about security, maintenance, and licensing for their dependencies.
</sub>


dir-glob Build Status

Convert directories to glob compatible strings

Install

npm install dir-glob

Usage

API

dirGlob(input, options?)

Returns a Promise<string[]> with globs.

dirGlob.sync(input, options?)

Returns a string[] with globs.

input

Type: string | string[]

Paths.

options

Type: object

extensions

Type: string[]

Append extensions to the end of your globs.

files

Type: string[]

Only glob for certain files.

cwd

Type: string[]

Test in specific directory.



load-json-file Build Status

Read and parse a JSON file

Strips UTF-8 BOM, uses graceful-fs, and throws more helpful JSON errors.

Install

npm install load-json-file

Usage

API

loadJsonFile(filePath, options)

Returns a promise for the parsed JSON.

loadJsonFile.sync(filepath, options)

Returns the parsed JSON.

options

Type: Object

beforeParse

Type: Function

Applies a function to the JSON string before parsing.

reviver

Type: Function

Prescribes how the value originally produced by parsing is transformed, before being returned. See the JSON.parse docs for more.



term-size Build Status

Reliably get the terminal window size

Because process.stdout.columns doesn’t exist when run non-interactively, for example, in a child process or when piped. This module even works when all the TTY file descriptors are redirected!

Confirmed working on macOS, Linux, and Windows.

Install

npm install term-size

Usage

API

termSize()

Returns an object with columns and rows properties.

Info

The bundled macOS binary is signed and hardened.


<b>
    <a href="https://tidelift.com/subscription/pkg/npm-term-size?utm_source=npm-term-size&utm_medium=referral&utm_campaign=readme">Get professional support for this package with a Tidelift subscription</a>
</b>
<br>
<sub>
    Tidelift helps make open source sustainable for maintainers while giving companies<br>assurances about security, maintenance, and licensing for their dependencies.
</sub>


import-modules

Import all modules in a directory

This package is intentionally simple. Not interested in more features.

Install

npm install import-modules

Usage

.
└── directory
    ├── foo-bar.js
    └── baz-faz.js

API

importModules(directory?, options?)

directory

Type: string
Default: __dirname

Directory to import modules from. Unless you’ve set the fileExtensions option, that means any .js, .json, .node files, in that order. Does not recurse. Ignores the caller file and files starting with . or _.

options

Type: object

camelize

Type: boolean
Default: true

Convert dash-style names (foo-bar) and snake-style names (foo_bar) to camel-case (fooBar).

fileExtensions

Type: string[]
Default: ['.js', '.json', '.node']

File extensions to look for. Order matters.



is-stream Build Status

Check if something is a Node.js stream

Install

npm install is-stream

Usage

API

isStream(stream)

Returns a boolean for whether it’s a Stream.

isStream.writable(stream)

Returns a boolean for whether it’s a stream.Writable.

isStream.readable(stream)

Returns a boolean for whether it’s a stream.Readable.

isStream.duplex(stream)

Returns a boolean for whether it’s a stream.Duplex.

isStream.transform(stream)

Returns a boolean for whether it’s a stream.Transform.



globals Build Status

Global identifiers from different JavaScript environments

Extracted from JSHint and ESLint and merged.

It’s just a JSON file, so use it in whatever environment you like.

This module no longer accepts new environments. If you need it for ESLint, just create a plugin.

Install

npm install globals

Usage

Each global is given a value of true or false. A value of true indicates that the variable may be overwritten. A value of false indicates that the variable should be considered read-only. This information is used by static analysis tools to flag incorrect behavior. We assume all variables should be false unless we hear otherwise.



p-limit Build Status

Run multiple promise-returning & async functions with limited concurrency

Install

npm install p-limit

Usage

API

pLimit(concurrency)

Returns a limit function.

concurrency

Type: number
Minimum: 1

Concurrency limit.

limit(fn)

Returns the promise returned by calling fn.

fn

Type: Function

Promise-returning/async function.
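A concurrency limiter in the spirit of pLimit can be sketched with a counter and a queue; this is a simplified illustration, not the published implementation:

```javascript
// Sketch of pLimit: run at most `concurrency` functions at once,
// queueing the rest until a running one settles.
function pLimit(concurrency) {
  const queue = [];
  let active = 0;

  const next = () => {
    active--;
    if (queue.length > 0) queue.shift()();
  };

  return (fn, ...args) => new Promise((resolve, reject) => {
    const run = () => {
      active++;
      // Promise.resolve().then(...) also catches synchronous throws from fn.
      Promise.resolve().then(() => fn(...args)).then(resolve, reject).then(next);
    };
    if (active < concurrency) run();
    else queue.push(run);
  });
}
```

Usage mirrors the API above: `const limit = pLimit(2); limit(() => fetchSomething())`.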

  • p-queue - Promise queue with concurrency control
  • p-throttle - Throttle promise-returning & async functions
  • p-debounce - Debounce promise-returning & async functions
  • p-all - Run promise-returning & async functions concurrently with optional limited concurrency
  • More…



stubs

It’s a simple stubber.

About

For when you don’t want to write the same thing over and over to cache a method and call an override, then revert it, and blah blah.

Use

API

stubs(object, method[[, opts], stub])

object

  • Type: Object

method

  • Type: String

Name of the method to stub.

opts

  • (optional)
  • Type: Object

opts.callthrough

  • (optional)
  • Type: Boolean
  • Default: false

Call the original method as well as the stub (if a stub is provided).

opts.calls

  • (optional)
  • Type: Number
  • Default: 0 (never revert)

Number of calls to allow the stub to receive until reverting to the original.

stub

  • (optional)
  • Type: Function
  • Default: function() {}

This method is called in place of the original method. If opts.callthrough is true, this method is called after the original method is called as well.


path-exists Build Status

Check if a path exists

NOTE: fs.existsSync has been un-deprecated in Node.js since 6.8.0. If you only need to check synchronously, this module is not needed.

While fs.exists() is being deprecated, there’s still a genuine use-case of being able to check if a path exists for other purposes than doing IO with it.

Never use this before handling a file though:

In particular, checking if a file exists before opening it is an anti-pattern that leaves you vulnerable to race conditions: another process may remove the file between the calls to fs.exists() and fs.open(). Just open the file and handle the error when it’s not there.

Install

npm install path-exists

Usage

API

pathExists(path)

Returns a Promise<boolean> of whether the path exists.

pathExists.sync(path)

Returns a boolean of whether the path exists.



get-stdin Build Status

Get stdin as a string or buffer

Install

npm install get-stdin

Usage

$ echo unicorns | node example.js
unicorns

API

Both methods return a promise that is resolved when the end event fires on the stdin stream, indicating that there is no more data to be read.

getStdin()

Get stdin as a string.

In a TTY context, a promise that resolves to an empty string is returned.

getStdin.buffer()

Get stdin as a Buffer.

In a TTY context, a promise that resolves to an empty Buffer is returned.


<b>
    <a href="https://tidelift.com/subscription/pkg/npm-get-stdin?utm_source=npm-get-stdin&utm_medium=referral&utm_campaign=readme">Get professional support for this package with a Tidelift subscription</a>
</b>
<br>
<sub>
    Tidelift helps make open source sustainable for maintainers while giving companies<br>assurances about security, maintenance, and licensing for their dependencies.
</sub>


cli-spinners Build Status

70+ spinners for use in the terminal




The list of spinners is just a JSON file and can be used wherever.

You probably want to use one of these spinners through the ora module.

Install

npm install cli-spinners

Usage

Preview

The header GIF is outdated. See all the spinners at once or one at a time.

API

Each spinner comes with a recommended interval and an array of frames.

See the spinners.

The random spinner will return a random spinner each time it’s called.



map-obj Build Status

Map object keys and values into a new object

Install

npm install map-obj

Usage

API

mapObject(source, mapper, options?)

source

Type: object

Source object to copy properties from.

mapper

Type: Function

Mapping function.

  • It has signature mapper(sourceKey, sourceValue, source).
  • It must return a two item array: [targetKey, targetValue].
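The basic (non-deep, no target) behavior implied by that contract can be sketched as:

```javascript
// Sketch of mapObject's shallow behavior: apply mapper to each own key/value
// pair and collect the [targetKey, targetValue] results on a new object.
function mapObject(source, mapper, { target = {} } = {}) {
  for (const key of Object.keys(source)) {
    const [newKey, newValue] = mapper(key, source[key], source);
    target[newKey] = newValue;
  }
  return target;
}

console.log(mapObject({ foo: 'bar' }, (key, value) => [value, key])); // { bar: 'foo' }
```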

options

Type: object

deep

Type: boolean
Default: false

Recurse nested objects and objects in arrays.

target

Type: object
Default: {}

Target object to map properties on to.

  • filter-obj - Filter object keys and values into a new object

<b>
    <a href="https://tidelift.com/subscription/pkg/npm-map-obj?utm_source=npm-map-obj&utm_medium=referral&utm_campaign=readme">Get professional support for this package with a Tidelift subscription</a>
</b>
<br>
<sub>
    Tidelift helps make open source sustainable for maintainers while giving companies<br>assurances about security, maintenance, and licensing for their dependencies.
</sub>


import-fresh

Import a module while bypassing the cache

Useful for testing purposes when you need to freshly import a module.

Install

npm install import-fresh

Usage

import-fresh for enterprise

Available as part of the Tidelift Subscription.

The maintainers of import-fresh and thousands of other packages are working with Tidelift to deliver commercial support and maintenance for the open source dependencies you use to build your applications. Save time, reduce risk, and improve code health, while paying the maintainers of the exact dependencies you use. Learn more.



parent-module Build Status

Get the path of the parent module

Node.js exposes module.parent, but it only gives you the first cached parent, which is not necessarily the actual parent.

Install

npm install parent-module

Usage

API

parentModule(filepath)

By default, it will return the path of the immediate parent.

filepath

Type: string
Default: __filename

Filepath of the module of which to get the parent path.

Useful if you want it to work multiple module levels down.

Tip

Combine it with read-pkg-up to read the package.json of the parent module.



crypto-random-string Build Status

Generate a cryptographically strong random string

Can be useful for creating an identifier, slug, salt, fixture, etc.

Install

npm install crypto-random-string

Usage

API

cryptoRandomString(length)

Returns a hex string.

length

Type: number

Length of the returned string.



map-age-cleaner

Build Status codecov

Automatically cleanup expired items in a Map

Install

npm install map-age-cleaner

Usage

Note: Items have to be ordered ascending based on the expiry property. This means that the item which will be expired first, should be in the first position of the Map.

API

mapAgeCleaner(map, property)

Returns the Map instance.

map

Type: Map

Map instance which should be cleaned up.

property

Type: string
Default: maxAge

Name of the property which holds the expiry timestamp.

  • expiry-map - A Map implementation with expirable items
  • expiry-set - A Set implementation with expirable keys
  • mem - Memoize functions


dir-glob Build Status

Convert directories to glob compatible strings

Install

npm install dir-glob

Usage

API

dirGlob(input, options)

Returns a Promise for an array of glob strings.

dirGlob.sync(input, options)

Returns an array of glob strings.

input

Type: string | Array

A string or an Array of paths.

options

extensions

Type: Array

Append extensions to the end of your globs.

files

Type: Array

Only glob for certain files.

cwd

Type: string

Test in specific directory.



strip-json-comments Build Status

Strip comments from JSON. Lets you use comments in your JSON files!

This is now possible:

It will replace single-line comments // and multi-line comments /**/ with whitespace. This allows JSON error positions to remain as close as possible to the original source.

Also available as a gulp/grunt/broccoli plugin.

Install

npm install --save strip-json-comments

Usage

API

stripJsonComments(input, options)

input

Type: string

Accepts a string with JSON and returns a string without comments.

options

whitespace

Type: boolean
Default: true

Replace comments with whitespace instead of stripping them entirely.
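The whitespace-replacement strategy can be sketched naively. Note this sketch ignores comment markers inside JSON strings, which the real module guards against, so it is for illustration only:

```javascript
// Naive sketch: replace // and /* */ comments with same-length whitespace so
// JSON parse-error positions stay close to the original source.
function stripJsonComments(input) {
  return input
    .replace(/\/\*[\s\S]*?\*\//g, (m) => m.replace(/[^\n]/g, ' ')) // block comments
    .replace(/\/\/[^\n]*/g, (m) => ' '.repeat(m.length));          // line comments
}

console.log(JSON.parse(stripJsonComments('{"a": 1 // comment\n}')));
```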



log-symbols Build Status

Colored symbols for various log levels

Includes fallbacks for Windows CMD which only supports a limited character set.

Install

npm install log-symbols

Usage

API

logSymbols

info

success

warning

error


<b>
    <a href="https://tidelift.com/subscription/pkg/npm-log-symbols?utm_source=npm-log-symbols&utm_medium=referral&utm_campaign=readme">Get professional support for this package with a Tidelift subscription</a>
</b>
<br>
<sub>
    Tidelift helps make open source sustainable for maintainers while giving companies<br>assurances about security, maintenance, and licensing for their dependencies.
</sub>


currently-unhandled Build Status Coverage Status

Track the list of currently unhandled promise rejections.

Install

npm install --save currently-unhandled

Usage

API

currentlyUnhandled()

Returns an array of objects with promise and reason properties representing the rejected promises that currently do not have a rejection handler. The list grows and shrinks as unhandledRejections are published, and later handled.

This module can be bundled with browserify. At time of writing, it will work with native Promises in the Chrome browser only. For best cross-browser support, use bluebird instead of native Promise support in browsers.
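
The mechanism can be sketched with the two process events the module listens to (a simplification of what it does for you):

```javascript
// Track rejections via 'unhandledRejection' and remove them again
// once 'rejectionHandled' fires for the same promise.
function currentlyUnhandled() {
  const list = [];
  process.on('unhandledRejection', (reason, promise) => {
    list.push({promise, reason});
  });
  process.on('rejectionHandled', promise => {
    const index = list.findIndex(item => item.promise === promise);
    if (index !== -1) list.splice(index, 1);
  });
  return () => list.slice(); // snapshot of the current list
}
```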



object-assign Build Status

ES2015 Object.assign() ponyfill

Use the built-in

Node.js 4 and up, as well as every evergreen browser (Chrome, Edge, Firefox, Opera, Safari), support Object.assign() :tada:. If you target only those environments, then by all means, use Object.assign() instead of this package.

Install

npm install --save object-assign

Usage

API

objectAssign(target, [source, …])

Assigns enumerable own properties of source objects to the target object and returns the target object. Additional source objects will overwrite previous ones.
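
The overwrite rule reads like this in practice (shown with the built-in, whose behavior the ponyfill mirrors):

```javascript
const objectAssign = Object.assign; // stand-in for require('object-assign')

const result = objectAssign({foo: 0}, {bar: 1}, {bar: 2});
// later sources overwrite earlier ones: {foo: 0, bar: 2}
```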

Resources



clean-yaml-object Build Status Coverage Status

Clean up an object prior to serialization.

Replaces circular references, pretty prints Buffers, and numerous other enhancements. Primarily designed to prepare Errors for serialization to JSON/YAML.

Extracted from node-tap

Install

npm install --save clean-yaml-object

Usage

API

cleanYamlObject(input, filterFn)

Returns a deep copy of input that is suitable for serialization.

input

Type: *

Any object.

filterFn

Type: callback(propertyName, isRoot, source, target)

Optional filter callback. Returning true will cause the property to be copied. Otherwise it will be skipped.

  • propertyName: The property being copied.
  • isRoot: true only if source is the top-level object passed to cleanYamlObject.
  • source: The source from which source[propertyName] will be copied.
  • target: The target object.


is-path-inside Build Status

Check if a path is inside another path

Install

npm install is-path-inside

Usage

API

isPathInside(childPath, parentPath)

Note that relative paths are resolved against process.cwd() to make them absolute.

Important: This package is meant for use with path manipulation. It does not check if the paths exist nor does it resolve symlinks. You should not use this as a security mechanism to guard against access to certain places on the file system.

childPath

Type: string

The path that should be inside parentPath.

parentPath

Type: string

The path that should contain childPath.




locate-path Build Status

Get the first path that exists on disk of multiple paths

Install

npm install locate-path

Usage

Here we find the first file that exists on disk, in array order.

API

locatePath(input, options)

Returns a Promise for the first path that exists or undefined if none exists.

input

Type: Iterable<string>

Paths to check.

options

Type: Object

concurrency

Type: number
Default: Infinity
Minimum: 1

Number of concurrently pending promises.

preserveOrder

Type: boolean
Default: true

Preserve input order when searching.

Disable this to improve performance if you don’t care about the order.

cwd

Type: string
Default: process.cwd()

Current working directory.

locatePath.sync(input, options)

Returns the first path that exists or undefined if none exists.

input

Type: Iterable<string>

Paths to check.

options

Type: Object

cwd

Same as above.



string-width Build Status

Get the visual width of a string - the number of columns required to display it

Some Unicode characters are fullwidth and use double the normal width. ANSI escape codes are stripped and don't affect the width.

Useful to be able to measure the actual width of command-line output.

Install

npm install string-width

Usage
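
A toy sketch of the measurement (a rough approximation: the real module uses full Unicode East Asian Width data and a proper ANSI matcher):

```javascript
// Strip simple ANSI color codes, then count fullwidth code points twice.
function stringWidth(input) {
  const stripped = input.replace(/\u001B\[\d+m/g, ''); // naive ANSI strip
  let width = 0;
  for (const character of stripped) {
    const codePoint = character.codePointAt(0);
    // Crude CJK fullwidth range check, for illustration only
    width += codePoint >= 0x1100 && codePoint <= 0x9FFF ? 2 : 1;
  }
  return width;
}

stringWidth('abc');                    // 3
stringWidth('古');                     // 2
stringWidth('\u001B[31mhi\u001B[39m'); // 2
```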




stream-events

Get an event when you’re being sent data or asked for it.

About

This is just a simple thing that tells you when _read and _write have been called, saving you the trouble of writing this yourself. You receive two events reading and writing– no magic is performed.

This works well with duplexify or lazy streams, so you can wait until you know you’re being used as a stream to do something asynchronous, such as fetching an API token.

Use

Using with Duplexify
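
Independent of duplexify, the wrapping itself can be sketched with core streams (a hypothetical simplification of what the module does):

```javascript
const {Readable} = require('stream');

// Wrap _read/_write so the stream emits 'reading'/'writing' when called.
function streamEvents(stream) {
  for (const [method, event] of [['_read', 'reading'], ['_write', 'writing']]) {
    const original = stream[method];
    if (typeof original !== 'function') continue;
    stream[method] = function (...args) {
      stream.emit(event);
      return original.apply(this, args);
    };
  }
  return stream;
}

// Lazy setup: do asynchronous work only once someone actually reads.
const lazy = streamEvents(new Readable({read() { this.push(null); }}));
lazy.once('reading', () => {
  // e.g. fetch an API token here, then start pushing data
});
lazy.resume();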




eslint-config-xo-typescript Build Status

ESLint shareable config for TypeScript to be used with eslint-config-xo

Install

npm install --save-dev eslint-config-xo eslint-config-xo-typescript @typescript-eslint/parser @typescript-eslint/eslint-plugin

Usage with XO

XO has built-in support for TypeScript, using this package under the hood, so you do not have to configure anything.

Standalone Usage

Add some ESLint config to your package.json (or .eslintrc):

Use the space sub-config if you want 2 space indentation instead of tabs:

Note: If your tsconfig.json is not in the same directory as package.json, you will have to set the path yourself:
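
The referenced config might look like this in package.json (assuming the standard ESLint extends mechanism):

```json
{
  "eslintConfig": {
    "extends": ["xo", "xo-typescript"]
  }
}
```

The space variant swaps the second entry for "xo-typescript/space". A custom tsconfig location goes through the parser options ("some/path" is a placeholder):

```json
{
  "eslintConfig": {
    "extends": ["xo", "xo-typescript"],
    "parserOptions": {
      "project": "some/path/tsconfig.json"
    }
  }
}
```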



clean-stack Build Status

Clean up error stack traces

Removes the mostly unhelpful internal Node.js entries.

Also works in Electron.

Install

npm install clean-stack

Usage

API

cleanStack(stack, options)

stack

Type: string

The stack property of an Error.

options

Type: Object

pretty

Type: boolean
Default: false

Prettify the file paths in the stack:

/Users/sindresorhus/dev/clean-stack/unicorn.js:2:15 → ~/dev/clean-stack/unicorn.js:2:15



xdg-basedir Build Status

Get XDG Base Directory paths

Install

npm install xdg-basedir

Usage

API

The properties .data, .config, .cache, and .runtime will return null in the uncommon case that both the XDG environment variable is not set and the user's home directory can't be found. You need to handle this case. A common solution is to fall back to a temp directory.

.data

Directory for user-specific data files.

.config

Directory for user-specific configuration files.

.cache

Directory for user-specific non-essential data files.

.runtime

Directory for user-specific non-essential runtime files and other file objects (such as sockets, named pipes, etc).

.dataDirs

Preference-ordered array of base directories to search for data files in addition to .data.

.configDirs

Preference-ordered array of base directories to search for configuration files in addition to .config.



cli-boxes Build Status

Boxes for use in the terminal

The list of boxes is just a JSON file and can be used anywhere.

Install

npm install cli-boxes

Usage

API

cliBoxes

single

┌────┐
│    │
└────┘

double

╔════╗
║    ║
╚════╝

round

╭────╮
│    │
╰────╯

bold

┏━━━━┓
┃    ┃
┗━━━━┛

singleDouble

╓────╖
║    ║
╙────╜

doubleSingle

╒════╕
│    │
╘════╛

classic

+----+
|    |
+----+
  • boxen - Create boxes in the terminal
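
Since the module exports plain data, drawing a box is just string assembly. A sketch using the single style shown above (the property names here are an assumption about the data shape):

```javascript
// Character set copied from the `single` style.
const single = {
  topLeft: '┌', topRight: '┐', bottomLeft: '└', bottomRight: '┘',
  horizontal: '─', vertical: '│'
};

function drawBox(width, height, box) {
  const top = box.topLeft + box.horizontal.repeat(width) + box.topRight;
  const middle = box.vertical + ' '.repeat(width) + box.vertical;
  const bottom = box.bottomLeft + box.horizontal.repeat(width) + box.bottomRight;
  return [top, ...Array(height).fill(middle), bottom].join('\n');
}

console.log(drawBox(4, 1, single));
// ┌────┐
// │    │
// └────┘
```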



strip-ansi Build Status

Strip ANSI escape codes from a string

Install

npm install strip-ansi

Usage
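
A sketch of the stripping, built on a simplified version of the pattern from ansi-regex (the real module's regex covers more escape forms):

```javascript
// Remove ANSI escape sequences such as color codes.
const stripAnsi = input =>
  input.replace(/[\u001B\u009B][[\]()#;?]*(?:\d{1,4}(?:;\d{0,4})*)?[0-9A-ORZcf-nqry=><]/g, '');

stripAnsi('\u001B[4mUnicorn\u001B[0m'); // 'Unicorn'
```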

strip-ansi for enterprise

Available as part of the Tidelift Subscription.

The maintainers of strip-ansi and thousands of other packages are working with Tidelift to deliver commercial support and maintenance for the open source dependencies you use to build your applications. Save time, reduce risk, and improve code health, while paying the maintainers of the exact dependencies you use. Learn more.

Maintainers



has-flag Build Status

Check if argv has a specific flag

Correctly stops looking after an -- argument terminator.



Install

npm install has-flag

Usage

$ node foo.js -f --unicorn --foo=bar -- --rainbow

API

hasFlag(flag, argv)

Returns a boolean for whether the flag exists.

flag

Type: string

CLI flag to look for. The -- prefix is optional.

argv

Type: string[]
Default: process.argv

CLI arguments.
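
The documented behavior, including stopping at the -- terminator, can be sketched like this:

```javascript
// A flag counts only if it appears before the `--` argument terminator.
function hasFlag(flag, argv = process.argv) {
  const prefix = flag.startsWith('-') ? '' : (flag.length === 1 ? '-' : '--');
  const position = argv.indexOf(prefix + flag);
  const terminator = argv.indexOf('--');
  return position !== -1 && (terminator === -1 || position < terminator);
}

// Given `node foo.js -f --unicorn --foo=bar -- --rainbow`:
hasFlag('unicorn'); // true
hasFlag('rainbow'); // false: it appears after the `--` terminator
```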

Security

To report a security vulnerability, please use the Tidelift security contact. Tidelift will coordinate the fix and disclosure.



decode-uri-component

Build Status Coverage Status

A better decodeURIComponent

Why?

  • Decodes + to a space.
  • Converts the BOM to a replacement character (�).
  • Does not throw with invalid encoded input.
  • Decodes as much of the string as possible.

Install

npm install --save decode-uri-component

Usage

API

decodeUriComponent(encodedURI)

encodedURI

Type: string

An encoded component of a Uniform Resource Identifier.
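
The "decode as much as possible" behavior can be sketched with a graceful fallback around the native decoder (a simplification: the real module also decodes + to a space and recovers partial multi-byte sequences, which this sketch does not):

```javascript
// Try the native decoder first; on failure, decode token by token so
// undecodable sequences survive instead of throwing.
function gracefulDecode(input) {
  try {
    return decodeURIComponent(input);
  } catch {
    return input.replace(/(%[0-9A-Fa-f]{2})/g, token => {
      try {
        return decodeURIComponent(token);
      } catch {
        return token; // leave the bad sequence as-is
      }
    });
  }
}

gracefulDecode('test%20%3F'); // 'test ?'
gracefulDecode('%E0%A4%A');   // keeps what it can, never throws
```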



resolve-cwd Build Status

Resolve the path of a module like require.resolve() but from the current working directory

Install

npm install resolve-cwd

Usage

API

resolveCwd(moduleId)

Like require(), throws when the module can’t be found.

resolveCwd.silent(moduleId)

Returns undefined instead of throwing when the module can’t be found.

moduleId

Type: string

What you would use in require().
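
The behavior can be sketched with require.resolve and its paths option, which is essentially what the module wraps:

```javascript
// Resolve relative to the current working directory, not this file.
const resolveCwd = moduleId =>
  require.resolve(moduleId, {paths: [process.cwd()]});

// The .silent variant swallows the lookup error.
const resolveCwdSilent = moduleId => {
  try {
    return resolveCwd(moduleId);
  } catch {
    return undefined;
  }
};
```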

  • resolve-from - Resolve the path of a module from a given path
  • import-from - Import a module from a given path
  • import-cwd - Import a module from the current working directory
  • resolve-pkg - Resolve the path of a package regardless of it having an entry point
  • import-lazy - Import a module lazily
  • resolve-global - Resolve the path of a globally installed module


hash-stream-validation

Hash a stream of data, then validate

For faster crc32c computation, install fast-crc32c:

npm install --save fast-crc32c

If it is installed, this module will try to require fast-crc32c. We chose not to make it an optionalDependency because npm’s scary warning output confuses users into thinking their hard drive was just erased.

Use Case

After a successful upload to a Google Cloud Storage bucket, the API will respond with the hash of data it has received. During our upload, we can run the data through this module, then confirm after the upload if we both arrived at the same results. If not, we know something went wrong during the transmission.

API

validateStream = hashStreamValidation(opts)

opts.crc32c

  • Type: Boolean
  • Default: true

Enable crc32c hashing via sse4_crc32.*

  • Note: Any issues installing this module on your system should be opened at their repository.

opts.md5

  • Type: Boolean
  • Default: true

Enable MD5 hashing.

validateStream.test(algo, sum)

algo

  • Type: String

The algorithm to test the sum against (‘crc32c’ or ‘md5’).

sum

  • Type: String

The base64-encoded sum to validate.



env-editor Build Status

Get metadata on the default editor or a specific editor

This module is used by open-editor.

  • Sublime Text
  • Atom
  • Visual Studio Code
  • WebStorm
  • TextMate
  • Vim
  • NeoVim
  • IntelliJ
  • GNU nano
  • GNU Emacs

Install

npm install env-editor

Usage

API

defaultEditor()

Returns metadata on the default editor.

The user is expected to have the $EDITOR environment variable set, and if not, a user-friendly error is thrown.
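
The $EDITOR lookup behind defaultEditor() can be sketched like this (a simplification of the real lookup, which also normalizes the value into editor metadata):

```javascript
// Read the preferred editor from the environment, with a friendly error.
function defaultEditorId() {
  const editor = process.env.EDITOR;
  if (!editor) {
    throw new Error('Set the $EDITOR environment variable to your preferred editor.');
  }
  return editor;
}
```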

getEditor(editor)

Returns metadata on the specified editor.

editor

Type: string

This can be pretty flexible. It matches against all the data it has.

For example, to get Sublime Text, you could write any of the following: sublime, Sublime Text, or subl.

allEditors()

Returns an array with metadata on all the editors.




SUPERTAP


Build Status

Generate TAP output

Install

npm install supertap

Usage

Output:

TAP version 13
# passing
ok 1 - passing

1..1
# tests 1
# pass 1
# fail 0

API

start()

Always returns the string 'TAP version 13'.

test(title, options)

title

Type: string

Test title.

options

index

Type: number

Index of the test. Should start with one, not zero.

passed

Type: boolean
Default: false

Status of the test.

error

Type: Error

If test has failed (passed is false), error is an instance of an actual error.

todo
skip

Type: boolean
Default: false

Mark test as to-do or as skipped.

comment

Type: string[]

Comments for that test.

finish(stats)

stats

passed
failed
skipped
todo
crashed

Type: number
Default: 0

Number of tests that passed, failed, were skipped, or were marked as to-do. crashed is a special option which adds to the failed test count in the output, but not to the total test count. AVA uses it to count unhandled exceptions.
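
The line assembly behind the sample output above can be sketched like this (the real module also serializes errors as YAML blocks and handles to-do/skip directives more thoroughly):

```javascript
function start() {
  return 'TAP version 13';
}

// Build one TAP test point line.
function test(title, {index, passed = false, skip = false, todo = false}) {
  let directive = '';
  if (skip) directive = ' # SKIP';
  if (todo) directive = ' # TODO';
  return `${passed ? 'ok' : 'not ok'} ${index} - ${title}${directive}`;
}

// Build the trailing plan and summary; crashed counts as failed in the
// output but does not grow the total.
function finish({passed = 0, failed = 0, crashed = 0}) {
  const total = passed + failed;
  return [
    `1..${total}`,
    `# tests ${total}`,
    `# pass ${passed}`,
    `# fail ${failed + crashed}`
  ].join('\n');
}

console.log(start());
console.log(test('passing', {index: 1, passed: true}));
console.log(finish({passed: 1}));
```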



read-pkg Build Status

Read a package.json file

Why

Install

npm install --save read-pkg

Usage

API

readPkg(path, options)

Returns a Promise for the parsed JSON.

readPkg.sync(path, options)

Returns the parsed JSON.

path

Type: string
Default: .

Path to a package.json file or its directory.

options

normalize

Type: boolean
Default: true

Normalize the package data.



strip-ansi Build Status

Strip ANSI escape codes from a string



Install

npm install strip-ansi

Usage

Security

To report a security vulnerability, please use the Tidelift security contact. Tidelift will coordinate the fix and disclosure.

Maintainers



micro-spelling-correcter Build Status codecov

Simple breadth-first, early-terminating Levenshtein-distance auto correcter for small sets of possible resulting strings.

Finds the first suitable correction for a word if there is one with a distance less than or equal to the target maximum distance and returns it; otherwise returns undefined.

Additionally, applies a simple heuristic of limiting the maximum distance to half the input length rounded down, but not less than one, which helps avoid corrections that feel weird in real life (like ‘a’ => ‘is’ or ‘foo’ => ‘log’ with distance 2).

Details:

  • The cost of every edit is counted as 1, though for every analyzed distance the search tries skips, then replacements, then transpositions, then additions.
  • Checks if the word is in the target word set at the start and just returns the word if it is (so you don’t need to check it yourself).

Example

npm install micro-spelling-correcter



ms

Build Status Slack Channel

Use this package to easily convert various time formats to milliseconds.

Examples

Convert from milliseconds

Time format written-out

Features

  • Works both in node and in the browser.
  • If a number is supplied to ms, a string with a unit is returned.
  • If a string that contains the number is supplied, it returns it as a number (e.g.: it returns 100 for '100').
  • If you pass a string with a number and a valid unit, the number of equivalent ms is returned.
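
The features above can be sketched in a toy implementation (the real module supports many more units, long-form output like '2 days', and fractional rounding):

```javascript
// Milliseconds per unit for a few short-form units.
const units = {ms: 1, s: 1000, m: 60000, h: 3600000, d: 86400000};

function ms(value) {
  if (typeof value === 'number') {
    // Number in: short string out (very simplified formatting).
    return value >= 60000 ? `${Math.round(value / 60000)}m` : `${value}ms`;
  }
  const match = /^(\d+(?:\.\d+)?)\s*([a-z]*)$/i.exec(value.trim());
  if (!match) return undefined;
  // A bare number string has no unit and comes back as milliseconds.
  return parseFloat(match[1]) * (units[match[2]] || 1);
}

ms('2 d');  // 172800000
ms('100');  // 100
ms(60000);  // '1m'
```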

Caught a bug?

  1. Fork this repository to your own GitHub account and then clone it to your local device
  2. Link the package to the global module directory: npm link
  3. Within the module you want to test your local development instance of ms, just link it to the dependencies: npm link ms. Instead of the default one from npm, node will now use your clone of ms!

As always, you can run the tests using: npm test




global-dirs

Get the directory of globally installed packages and binaries

Uses the same resolution logic as npm and yarn.

Install

npm install global-dirs

Usage

API

globalDirectories

npm

yarn

packages

Directory with globally installed packages.

Equivalent to npm root --global.

binaries

Directory with globally installed binaries.

Equivalent to npm bin --global.

prefix

Directory with directories for packages and binaries. You probably want either of the above.

Equivalent to npm prefix --global.




ansi-regex Build Status

Regular expression for matching ANSI escape codes

Install

npm install ansi-regex

Usage
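
The core pattern looks roughly like this (simplified from the real module's regex, which covers more escape forms):

```javascript
// Match ESC/CSI introducers followed by parameters and a final byte.
const ansiRegex = () =>
  /[\u001B\u009B][[\]()#;?]*(?:\d{1,4}(?:;\d{0,4})*)?[0-9A-ORZcf-nqry=><]/g;

'\u001B[4mcake\u001B[0m'.match(ansiRegex());
// → ['\u001B[4m', '\u001B[0m']
```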

FAQ

Why do you test for codes not in the ECMA 48 standard?

Some of the codes we run as a test are codes that we acquired finding various lists of non-standard or manufacturer specific codes. We test for both standard and non-standard codes, as most of them follow the same or similar format and can be safely matched in strings without the risk of removing actual string content. There are a few non-standard control codes that do not follow the traditional format (i.e. they end in numbers) thus forcing us to exclude them from the test because we cannot reliably match them.

On the historical side, those ECMA standards were established in the early 90’s whereas the VT100, for example, was designed in the mid/late 70’s. At that point in time, control codes were still pretty ungoverned and engineers used them for a multitude of things, namely to activate hardware ports that may have been proprietary. Somewhere else you see a similar ‘anarchy’ of codes is in the x86 architecture for processors; there are a ton of “interrupts” that can mean different things on certain brands of processors, most of which have been phased out.

Maintainers



eslint-formatter-pretty Build Status

Pretty formatter for ESLint

Highlights

  • Pretty output.
  • Sorts results by severity.
  • Stylizes inline codeblocks in messages.
  • Command-click a rule ID to open its docs.
  • Command-click a header to reveal the first error in your editor. (iTerm-only)

Install

npm install --save-dev eslint-formatter-pretty

Usage

XO

Nothing to do. It’s the default formatter.

ESLint CLI

$ eslint --format=pretty file.js

grunt-eslint

gulp-eslint

eslint-loader (webpack)

Tips

In iTerm, Command-click the filename header to open the file in your editor.

In terminals with support for hyperlinks, Command-click the rule ID to open its docs.



read-pkg-up Build Status

Read the closest package.json file

Why

Install

npm install --save read-pkg-up

Usage

API

readPkgUp(options)

Returns a Promise for the result object.

readPkgUp.sync(options)

Returns the result object.

options

cwd

Type: string
Default: .

Directory to start looking for a package.json file.

normalize

Type: boolean
Default: true

Normalize the package data.

  • read-pkg - Read a package.json file
  • pkg-up - Find the closest package.json file
  • find-up - Find a file by walking up parent directories
  • pkg-conf - Get namespaced config from the closest package.json


aggregate-error Build Status

Create an error from multiple errors

Install

npm install aggregate-error

Usage

API

AggregateError(errors)

Returns an Error that is also an Iterable for the individual errors.

errors

Type: Array<Error|Object|string>

If a string, a new Error is created with the string as the error message.
If a non-Error object, a new Error is created with all properties from the object copied over.
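
The documented behavior, an Error that is also iterable over the normalized individual errors, can be sketched like this (a simplification of the real class, which also indents the combined message):

```javascript
class AggregateError extends Error {
  constructor(errors) {
    // Normalize strings and plain objects into Error instances.
    const normalized = errors.map(error => {
      if (error instanceof Error) return error;
      if (typeof error === 'string') return new Error(error);
      return Object.assign(new Error(error.message), error);
    });
    super(normalized.map(error => error.message).join('\n'));
    this.name = 'AggregateError';
    this._errors = normalized;
  }

  * [Symbol.iterator]() {
    yield* this._errors;
  }
}

const error = new AggregateError([new Error('foo'), 'bar']);
for (const individual of error) {
  console.log(individual.message); // 'foo', then 'bar'
}
```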



find-up Build Status: Linux and macOS Build Status: Windows

Find a file by walking up parent directories

Install

npm install --save find-up

Usage

/
└── Users
        └── sindresorhus
                ├── unicorn.png
                └── foo
                        └── bar
                                ├── baz
                                └── example.js

API

findUp(filename, options)

Returns a Promise for the filepath or null.

findUp([filenameA, filenameB], options)

Returns a Promise for the first filepath found (by respecting the order) or null.

findUp.sync(filename, options)

Returns a filepath or null.

findUp.sync([filenameA, filenameB], options)

Returns the first filepath found (by respecting the order) or null.

filename

Type: string

Filename of the file to find.

options

cwd

Type: string
Default: process.cwd()

Directory to start from.

  • find-up-cli - CLI for this module
  • pkg-up - Find the closest package.json file
  • pkg-dir - Find the root directory of an npm package

